Prosecution Insights
Last updated: April 18, 2026
Application No. 18/541,424

METHOD AND APPARATUS WITH SUPER-SAMPLING

Final Rejection §103

Filed: Dec 15, 2023
Examiner: RHIM, WOO CHUL
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (112 granted / 140 resolved; +18.0% vs TC avg; above average)
Interview Lift: +21.4% for resolved cases with an interview (strong)
Typical Timeline: 2y 11m average prosecution; 28 applications currently pending
Career History: 168 total applications across all art units
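The headline 80% allow rate follows directly from the granted/resolved counts above; a quick arithmetic check:

```python
# Career allow rate as reported above: 112 granted out of 140 resolved cases
granted, resolved = 112, 140
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # prints "80%"
```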

Statute-Specific Performance

Allow rate by statute, relative to the Tech Center average estimate (based on career data from 140 resolved cases):

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The submission dated 02/25/2026 amends claims 1, 3, 7, 9-14, and 17-19. Claims 1-19 are pending. In view of the amendments to the claims, the previously set forth claim objections and claim rejections under 35 U.S.C. 112(b) have been withdrawn.

Response to Arguments

Applicant's arguments with respect to Janis as previously applied have been fully considered, but they are not persuasive.

On pages 8-9 of the submission, the applicant argues that Janis as applied does not teach merging two images and then providing the merged image to the neural network because it provides only the current image frame as an input to the neural network. The examiner disagrees because, for example, pars. 74 and 75 of Janis teach providing to the neural network, in addition to the current image frame, a prior high resolution image that is warped to the corresponding pixel locations in the current image frame. As such, the examiner finds the applicant's argument unpersuasive.

On page 9, the applicant argues that Janis as applied does not teach a neural network outputting a high resolution image because the neural network of Janis outputs weights and not the high resolution image. The examiner disagrees. The examiner points out that the independent claims do not explicitly recite that the neural network directly outputs the second super-sampled image frame. Instead, the claim merely recites that the second super-sampled image frame is generated by performing a super-sampling operation. Indeed, the cited paragraphs of Janis do teach outputting a high resolution image by performing a super-sampling operation (see, e.g., pars. 75-78 of Janis, which teach obtaining a final HR color image corresponding to the current LR image using a result of a deep learning (DL) based generator).
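Janis's actual warping and blending procedure is not reproduced in the Office action. Purely as a hypothetical illustration of the kind of operation the examiner describes (warping a prior high-resolution frame to the current frame's pixel locations via motion vectors, then blending it with the current frame), one might sketch, with assumed function names and a fixed blend weight standing in for Janis's per-pixel weighting:

```python
import numpy as np

def warp_prior_hr(prior_hr: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Warp the prior high-resolution frame to the current frame's pixel
    locations using per-pixel motion vectors (nearest-neighbor gather)."""
    h, w = prior_hr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[..., 0] = dx and motion[..., 1] = dy, in pixels (assumed layout)
    src_x = np.clip(xs + motion[..., 0].round().astype(int), 0, w - 1)
    src_y = np.clip(ys + motion[..., 1].round().astype(int), 0, h - 1)
    return prior_hr[src_y, src_x]

def merge_frames(prior_hr, current_lr_up, motion, alpha=0.9):
    """Blend the warped prior HR frame with the (upsampled) current frame.
    alpha is a fixed history weight; a real pipeline would use per-pixel
    weights of the sort Janis's pars. 49-70 and 75-78 describe."""
    warped = warp_prior_hr(prior_hr, motion)
    return alpha * warped + (1.0 - alpha) * current_lr_up
```

With zero motion and alpha = 0.5 this simply averages the two frames pixel-by-pixel.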
As such, the examiner finds the argument unpersuasive.

With respect to the arguments about the amended limitations, they are moot because the amended limitations change the scope of the claims and hence are addressed in the new §103 rejections below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 11-14, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2023/0206394 to Janis in view of US Patent Application Publication No. 2022/0004798 to Zhu et al. (hereinafter Zhu).

For claim 1, Janis as applied teaches a processor-implemented method of a processor (see, e.g., FIGS. 2-3), the method comprising: generating, by a first processor, a merged image by merging a first super-sampled image frame, having been generated at a first time point, with a second input image frame corresponding to a super-sampling target for a second time point (see, e.g., pars. 45-46, 71 and 75 and FIGS. 2 and 3, which teach mapping/warping a prior/historical high resolution (HR) image with a current low resolution (LR) image so that the images can be blended for the first time point); and generating a second super-sampled image frame by performing a super-sampling operation at the second time point (see, e.g., pars. 75-78, which teach obtaining a final HR color image corresponding to the current LR image using a result of a deep learning (DL) based generator).

Janis as applied does not explicitly teach "increasing, by the first processor, a bit-precision of a result of an executing, by a second processor, of a super-sampling neural network model provided with a decreased bit precision of the merged image." In the analogous art, Zhu as applied teaches using one processor to perform a bit-depth reduction on an input and a bit-depth amplification on an output of an image enhancing neural network, and using another processor to generate the network output by executing the image enhancing neural network (see, e.g., pars. 43-44, 55-56 and 58-63 and FIGS. 1, 2 and 12 of Zhu). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis to perform the bit-depth reduction and amplification on the input and output of a neural network, which is executed by another processor as taught by Zhu, because Janis suggests lowering precision of data for optimizing the performance (see, e.g., par. 87 of Janis), and the modification would provide predictable results of reducing the processing load of the neural network while minimizing the distortion (see pars. 36-37 of Zhu and MPEP 2143(I)(D)) and also provide an appropriate bit depth for the neural network (see pars. 47 and 61 of Zhu).

For claim 2, Janis in view of Zhu teaches that the merging comprises: generating the merged image by mixing pixels of the first super-sampled image frame and pixels of the second input image frame based on determined change data (see, e.g., pars.
75-76 and 79 of Janis, which teach generating the blended image by blending pixels of the prior HR image and pixels of the current LR image based on the motion vector data between the current and prior images); and determining the change data corresponding to a change between the second input image frame and a first input image frame corresponding to a super-sampling target at the first time point (see, e.g., pars. 75-76 and 79 of Janis, which teach determining the motion vector data across the images, including the current and prior images).

For claim 3, Janis in view of Zhu teaches that the generating of the merged image comprises: applying a corresponding pixel of the second input image frame to a position satisfying a replacement condition among pixel positions of the merged image (see, e.g., pars. 76-78 of Janis, which teach favoring the weighting factor of a pixel of the current image when the difference between the corresponding pixels is significant; see also pars. 49-70 of Janis for more details on the calculation); and warping and applying a corresponding pixel of the first super-sampled image frame to a position violating the replacement condition among the pixel positions of the merged image using the change data (see, e.g., pars. 76-78 of Janis, which teach favoring the weighting factor of a pixel of the prior image when the difference between the corresponding pixels is not significant; see also pars. 49-70 for more details on the calculation).

For claim 11, Janis in view of Zhu teaches that the second super-sampled image frame has higher quality than the second input image frame (see, e.g., FIG. 2 of Janis, which shows generating a current HR output image from the current LR rendered image).

For claim 12, Janis as applied teaches an electronic device, comprising: a first processor (see, e.g., FIG. 3 of Janis) configured to: merge a first super-sampled image frame, having been generated at a first time point, with a second input image frame corresponding to a super-sampling target for a second time point to generate a merged image (see, e.g., pars. 45-46, 71 and 75 and FIGS. 2 and 3 of Janis, which teach mapping a prior/historical high resolution (HR) image with a current low resolution (LR) image so that the images can be blended); and wherein the first processor is further configured to generate a second super-sampled image frame by performing a super-sampling operation at the second time point (see, e.g., pars. 75-78 of Janis, which teach obtaining a final HR color image corresponding to the current LR image using a result of a deep learning (DL) based generator).

Janis as applied does not explicitly teach the first processor determining network input data by decreasing a bit-precision of an image and increasing a bit-precision of network output data, or a second processor generating network output data by executing a super-sampling neural network model based on the network input data. In the analogous art, Zhu as applied teaches using one processor to perform a bit-depth reduction on an input and a bit-depth amplification on an output of an image enhancing neural network, and using another processor to generate the network output by executing the image enhancing neural network (see, e.g., pars. 43-44, 55-56 and 58-63 and FIGS. 1, 2 and 12 of Zhu). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis to perform the bit-depth reduction and amplification on the input and output of a neural network, which is executed by another processor as taught by Zhu, because Janis suggests lowering precision of data for optimizing the performance (see, e.g., par. 87 of Janis), and the modification would provide predictable results of reducing the processing load of the neural network while minimizing the distortion (see pars. 36-37 of Zhu and MPEP 2143(I)(D)) and also provide an appropriate bit depth for the neural network (see pars. 47 and 61 of Zhu).

For claim 13, Janis in view of Zhu teaches that, to generate the merged image, the first processor is further configured to: generate the merged image by mixing pixels of the first super-sampled image frame and pixels of the second input image frame based on determined change data (see, e.g., pars. 75-76 and 79 of Janis, which teach generating the blended image by blending pixels of the prior HR image and pixels of the current LR image based on the motion vector data between the current and prior images); and determine the change data corresponding to a change between the second input image frame and a first input image frame corresponding to a super-sampling target at the first time point (see, e.g., pars. 75-76 and 79 of Janis, which teach determining the motion vector data across the images, including the current and prior images).

For claim 14, Janis in view of Zhu teaches that the first processor is further configured to: apply a corresponding pixel of the second input image frame to a position satisfying a replacement condition among pixel positions of the merged image (see, e.g., pars. 76-78 of Janis, which teach favoring the weighting factor of a pixel of the current image when the difference between the corresponding pixels is significant; see also pars. 49-70 for more details on the calculation), and warp and apply a corresponding pixel of the first super-sampled image frame to a position violating the replacement condition among the pixel positions of the merged image using the change data (see, e.g., pars.
76-78 of Janis, which teach favoring the weighting factor of a pixel of the prior image when the difference between the corresponding pixels is not significant; see also pars. 49-70 of Janis for more details on the calculation).

For claim 18, Janis as applied teaches a processor-implemented method (see, e.g., FIGS. 2-3), the method comprising: generating, by a first processor, a merged image by merging a first super-sampling image result at a first time point with a second image target at a second time point (see, e.g., pars. 45-46, 71 and 75 and FIGS. 2 and 3 of Janis, which teach mapping a prior/historical high resolution (HR) image with a current low resolution (LR) image so that the images can be blended); and generating a super-sampled second output image at the second time point (see, e.g., pars. 75-78 of Janis, which teach obtaining a final HR color image corresponding to the current LR image using a result of a deep learning (DL) based generator).

Janis as applied does not explicitly teach "increasing, by the first processor, a bit-precision provided a super-sampling neural network executed by a second processor provided with a decreased bit precision image of the merged image." In the analogous art, Zhu as applied teaches using one processor to perform a bit-depth reduction on an input and a bit-depth amplification on an output of an image enhancing neural network, and using another processor to generate the network output by executing the image enhancing neural network (see, e.g., pars. 43-44, 55-56 and 58-63 and FIGS. 1, 2 and 12 of Zhu). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis to perform the bit-depth reduction and amplification on the input and output of a neural network, which is executed by another processor as taught by Zhu, because Janis suggests lowering precision of data for optimizing the performance (see, e.g., par. 87 of Janis), and the modification would provide predictable results of reducing the processing load of the neural network while minimizing the distortion (see pars. 36-37 of Zhu and MPEP 2143(I)(D)) and also provide an appropriate bit depth for the neural network (see pars. 47 and 61 of Zhu).

For claim 19, Janis in view of Zhu teaches that the merging comprises: determining change data corresponding to a change between the second image target and a super-sampling target at the first time point (see, e.g., pars. 75-76 and 79 of Janis, which teach determining the motion vector data across the images, including the current and prior images); and mixing pixels of the first super-sampled output image and pixels of the second image target based on the change data to determine the merged image (see, e.g., pars. 75-76 and 79 of Janis, which teach generating the blended image by blending pixels of the prior HR image and pixels of the current LR image based on the motion vector data between the current and prior images).

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Janis in view of Zhu, and further in view of US Patent Application Publication No. 2020/0394772 to Afra.

For claim 4, Janis as applied does not explicitly teach "generating a second temporary image having lower bit-precision than the first temporary image by performing data type conversion on the first temporary image." In the analogous art, Zhu as applied teaches using one processor to perform a bit-depth reduction on an input of an image enhancing neural network, and using another processor to generate the network output by executing the image enhancing neural network (see, e.g., pars. 43-44, 55-56 and 58-63 and FIGS. 1, 2 and 12 of Zhu).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis to perform the bit-depth reduction on the input of a neural network as taught by Zhu because Janis suggests lowering precision of data for optimizing the performance (see, e.g., par. 87 of Janis), and the modification would provide predictable results of reducing the processing load of the neural network while minimizing the distortion (see pars. 36-37 of Zhu and MPEP 2143(I)(D)) and also provide an appropriate bit depth for the neural network (see pars. 47 and 61 of Zhu).

Janis in view of Zhu does not explicitly teach "generating a first temporary image having a narrower dynamic range than the merged image by performing tone mapping on the merged image." Afra in the analogous art teaches generating a first temporary image having a narrower dynamic range than the merged image by performing tone mapping on the merged image (see, e.g., pars. 230 and 236-245 and FIGS. 19-20 of Afra, which teach performing tone mapping on an input image of a neural network, causing the color values of the image to have a low dynamic range). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis in view of Zhu to use tone mapping as taught by Afra because doing so would make processing the input image less challenging than high dynamic range images (see, e.g., par. 230 of Afra).
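Afra's specific tone-mapping operator is not reproduced in the Office action. Purely as an illustration of how tone mapping can narrow dynamic range before a network and be undone afterwards, here is a generic Reinhard-style curve (an assumption for illustration, not necessarily what Afra discloses):

```python
import numpy as np

def tone_map(hdr: np.ndarray) -> np.ndarray:
    """Reinhard-style operator: compresses [0, inf) into [0, 1), yielding a
    'first temporary image' with a narrower dynamic range."""
    return hdr / (1.0 + hdr)

def inverse_tone_map(ldr: np.ndarray) -> np.ndarray:
    """Approximately recovers the original wide dynamic range afterwards."""
    return ldr / np.maximum(1.0 - ldr, 1e-6)
```

For example, a pixel value of 4.0 maps to 0.8 and is recovered as 4.0 by the inverse.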
For claim 15, Janis as applied does not explicitly teach a first processor configured to "generate a second temporary image having lower bit-precision than the first temporary image by performing data type conversion on the first temporary image." In the analogous art, Zhu as applied teaches using one processor to perform a bit-depth reduction on an input of an image enhancing neural network, and using another processor to generate the network output by executing the image enhancing neural network (see, e.g., pars. 43-44, 55-56 and 58-63 and FIGS. 1, 2 and 12 of Zhu). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis to perform the bit-depth reduction on the input of a neural network as taught by Zhu because Janis suggests lowering precision of data for optimizing the performance (see, e.g., par. 87 of Janis), and the modification would provide predictable results of reducing the processing load of the neural network while minimizing the distortion (see pars. 36-37 of Zhu and MPEP 2143(I)(D)) and also provide an appropriate bit depth for the neural network (see pars. 47 and 61 of Zhu).

Janis in view of Zhu does not explicitly teach "generate a first temporary image having a narrower dynamic range than the merged image by performing tone mapping on the merged image." Afra in the analogous art teaches generating a first temporary image having a narrower dynamic range than the merged image by performing tone mapping on the merged image (see, e.g., pars. 230 and 236-245 and FIGS. 19-20 of Afra, which teach performing tone mapping on an input image of a neural network, causing the color values of the image to have a low dynamic range).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis in view of Zhu to use tone mapping as taught by Afra because doing so would make processing the input image less challenging than high dynamic range images (see, e.g., par. 230 of Afra).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Janis in view of Zhu and Afra, and further in view of US Patent Application Publication No. 2018/0074980 to Kawahara.

For claim 16, Janis in view of Zhu and Afra teaches that the first processor is configured to: determine network input data based on the decreased bit-precision (see, e.g., par. 87 of Janis, which teaches reducing the precision of the network input data); store the network input data in a first memory space of the first processor (see, e.g., pars. 347-350 of Janis, which teach storing data in the memory); and duplicate the network input data from the first memory space to a second memory space of the second processor while an operation of the first processor is stopped (see, e.g., pars. 347-350 of Janis, which teach transferring data between processing units using copy engines).

Janis in view of Zhu and Afra does not explicitly teach stopping an operation of a processor from which the data is duplicated. Kawahara in the analogous art teaches stopping an operation of a processor to copy data from a memory associated with the processor to a memory associated with another processor (see, e.g., par. 143 of Kawahara). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Janis in view of Zhu and Afra to transfer data between memories as taught by Kawahara because doing so would yield predictable results of sharing data between different processing units (see, e.g., pars. 4-6 and 26 of Kawahara and MPEP 2143(I)(D)).
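The Zhu combination relied on throughout these rejections turns on decreasing bit-precision before the network runs and increasing it afterwards, with the network executed on a second processor. A minimal, hypothetical sketch of that pattern (the names, the 8-bit choice, and the [0, 1] range are illustrative assumptions, not Zhu's disclosure):

```python
import numpy as np

def decrease_bit_precision(img: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize a float image in [0, 1] to low-precision integers."""
    scale = (1 << bits) - 1
    return np.clip(np.rint(img * scale), 0, scale).astype(np.uint16)

def increase_bit_precision(q: np.ndarray, bits: int = 8) -> np.ndarray:
    """Map quantized integers back to float32 in [0, 1]."""
    return q.astype(np.float32) / ((1 << bits) - 1)

def run_super_sampling(merged: np.ndarray, model) -> np.ndarray:
    q_in = decrease_bit_precision(merged)   # first processor: reduce bit-depth
    q_out = model(q_in)                     # second processor executes the network
    return increase_bit_precision(q_out)    # first processor: amplify bit-depth
```

With an identity stand-in for the model, the round trip reproduces the input to within one quantization step.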
Allowable Subject Matter

Claims 5-10 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

In regard to claim 5, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "performing buffer layout conversion on the second temporary image to determine network input data, wherein the network input data has a depth characteristic layout instead of a spatial characteristic layout of the second temporary image."

In regard to claims 6-9, they depend on objected claim 5. Therefore, by virtue of their dependency, claims 6-9 are also indicated as containing objected-to subject matter.

In regard to claims 10 and 17, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "rendering the second input image frame at the second time point, wherein, according to an asynchronous pipeline method, the first super-sampled image frame is displayed at the second time point instead of the first time point when the first super-sampled image frame was generated, and wherein the second super-sampled image frame is displayed at a third time point instead of the second time point."

Additional Citations

The following table lists several references that are relevant to the subject matter claimed and disclosed in this application. The references are not relied on by the examiner, but are provided to assist the applicant in responding to this Office action.

Citation: Song et al. (WO Pat. Pub. 2023025245)
Relevance: Describes a video image processing method, a network training method, an electronic device, and a computer-readable storage medium. In one embodiment, the video image processing method comprises: performing, by using a first capsule network, feature extraction on the current image and N frames of reference images adjacent to the current image, so as to obtain a feature vector of the current image and a feature vector of each frame of reference image, wherein N is an integer greater than or equal to 1; performing, by using a first attention network, correlation processing on the feature vector of the current image and the feature vectors of the reference images, so as to obtain a first correlation vector; performing, by using a first motion estimation network, motion estimation processing on the first correlation vector, so as to obtain first inter-frame motion information; according to the first inter-frame motion information, performing image transformation on the reference images, so as to obtain post-transformation reference images; performing, by using a first motion compensation network, fusion processing on the current image and all the post-transformation reference images, so as to obtain a first fused image; and performing super-resolution processing on the first fused image, so as to obtain a target image.

Citation: Caballero et al. (US Pat. 10,701,394)
Relevance: Describes a method for enhancing a section of lower-quality visual data using a hierarchical algorithm; specifically, the hierarchical algorithm enhances a target section based on received sections of visual data. In one embodiment, the method includes selecting a plurality of low-resolution frames associated with a video, performing a first motion estimation between a first frame and a second frame, performing a second motion estimation between a third frame and the second frame, and generating a high-resolution frame representing the second frame based on the first motion estimation, the second motion estimation, and the second frame using a sub-pixel convolutional neural network.
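The claim 5 limitation indicated as allowable above, converting a spatial characteristic layout into a depth characteristic layout, resembles the common space-to-depth rearrangement. A hypothetical sketch of that rearrangement (an illustration, not the application's actual buffer layout conversion):

```python
import numpy as np

def space_to_depth(img: np.ndarray, block: int = 2) -> np.ndarray:
    """Rearrange an (H, W, C) image into (H/block, W/block, C*block*block):
    spatial detail is moved into the channel (depth) dimension."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    x = img.reshape(h // block, block, w // block, block, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // block, w // block,
                                              c * block * block)
```

A 4x4x3 image becomes 2x2x12; no pixel values are lost, only relocated.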
Table 1

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and Form 892.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon-Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WOO C RHIM/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Dec 15, 2023
Application Filed
Dec 05, 2025
Non-Final Rejection — §103
Jan 21, 2026
Examiner Interview Summary
Jan 21, 2026
Applicant Interview (Telephonic)
Feb 25, 2026
Response Filed
Apr 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601667
AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME
2y 5m to grant Granted Apr 14, 2026
Patent 12596134
DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED
2y 5m to grant Granted Apr 07, 2026
Patent 12591997
ARRANGEMENT DEVICE AND METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12586169
Mass Image Processing Apparatus and Method
2y 5m to grant Granted Mar 24, 2026
Patent 12579607
DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+21.4%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
