Prosecution Insights
Last updated: April 19, 2026
Application No. 18/102,265

METHOD AND APPARATUS WITH IMAGE RECONSTRUCTION

Non-Final OA §103
Filed: Jan 27, 2023
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (24 granted / 45 resolved; -8.7% vs TC avg)
Interview Lift: +47.0% (strong; resolved cases with interview vs without)
Typical Timeline: 3y 0m average prosecution
Career History: 85 total applications across all art units; 40 currently pending
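The headline examiner metrics above reduce to simple arithmetic over resolved cases. A minimal Python sketch of how they can be recomputed; only the 24-granted / 45-resolved totals come from this page, while the with/without-interview split is hypothetical, chosen to be consistent with the reported +47.0% lift:

```python
# Illustrative reconstruction of the examiner panel's headline metrics.
# Only the 24/45 totals are from the page; the interview split is assumed.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance for cases with an interview."""
    return rate_with - rate_without

career = allow_rate(24, 45)                   # 24 granted of 45 resolved
print(f"Career allow rate: {career:.0f}%")    # ~53%

# Hypothetical split matching the reported +47.0% lift:
lift = interview_lift(rate_with=85.0, rate_without=38.0)
print(f"Interview lift: +{lift:.1f}%")
```

The lift is a difference in percentage points between the two subgroups, not a multiplier on the base rate.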

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 45 resolved cases
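Each delta is the examiner's per-statute allow rate minus the Tech Center average, so the averages can be backed out by subtraction. A quick consistency check (figures copied from the table above) shows every statute implies the same ~40% TC baseline:

```python
# Back out the implied Tech Center average per statute from the
# examiner rates and the "vs TC avg" deltas shown above.

examiner = {"101": 18.2, "102": 11.8, "103": 60.1, "112": 8.4}
delta_vs_tc = {"101": -21.8, "102": -28.2, "103": +20.1, "112": -31.6}

# examiner_rate - delta = TC average; round to one decimal place
implied_tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(implied_tc_avg)  # each statute backs out the same 40.0% TC average
```

That all four statutes resolve to one baseline suggests the dashboard compares against a single blended Tech Center rate rather than per-statute averages.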

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of priority from Foreign Application No. KR10-2022-0096814, filed August 3, 2022.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 2, 2025 has been entered.

Response to Arguments

Examiner thanks Applicant for conducting the interview on 09/18/25. As noted in the "Applicant-Initiated Interview Summary," the agenda was discussed in detail; however, no agreement was reached. Examiner noted the arguments, but also noted that further consideration would be required before any decision could be made. Applicant's arguments filed October 30, 2025 have been fully considered but they are not persuasive. Applicant argues that the Munkberg and Ma references cannot be combined because Munkberg leads away from the teachings of Ma. Namely, Munkberg teaches a neural network layer "without hidden state recursion" and Ma discloses enabling "one-hidden-layer networks". Examiner respectfully disagrees. MPEP 2145(X)(D) states "a reference does not teach away if it merely expresses a general preference for an alternative invention but does not criticize, discredit or otherwise discourage investigation into the invention claimed."
In this case, neither the Munkberg nor Ma reference indicates that the references cannot be combined: although Munkberg does not use hidden state recursion and Ma does, the Munkberg reference does not discourage the use of one-hidden-layer networks, nor does it indicate that they would not be usable within the portions of the reference used to teach the disclosed invention. The Ma reference is relied upon to teach performing warping on a previous filter kernel. Examiner asserts that just because the Munkberg reference does not use hidden state recursion does not mean that one having ordinary skill in the art would not find it obvious to combine the teaching of warping a previous filter kernel of Ma with Munkberg's neural network models and previous filter kernel. Additionally, Examiner notes that the invention as claimed does not indicate whether hidden state recursion is even used to perform filter kernel warping. Thus, Applicant's arguments are not persuasive and the rejection of the claims is upheld.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4 and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Munkberg et al. (US 2020/0126191 A1) in view of Ma et al. ("Learning Invariant Representations with Kernel Warping").

Regarding claim 1, Munkberg teaches an image reconstruction method comprising: determining an image warping result by warping a previous reconstruction result using change-data comprising a difference between rendered images, wherein the change-data comprises a current change-data comprising a difference between a current rendered image and a previous rendered image (Munkberg, Para. [0028], the temporal warp function warps the external state based on the per-datum differences to produce the warped external state for time t-1. The warping aligns the external state from time t-1 to the input data at time t. Para. [0035], in an embodiment, the input data comprises rendered image frames); determining a previous filter kernel by executing a first neural network model with a previous rendered image and the image warping result (Munkberg, Para. [0040], The warped external state and the processed second input data frame are processed by the encoder/decoder neural network model (i.e., first neural network model) to generate spatially-varying filter kernels. When the sequence of input data is an image sequence, the combiner function applies a first filter kernel to pixels of the reconstructed first data frame (from time t-1) and applies a second filter kernel to pixels of the processed second input data frame (from time t)); determining a current reconstruction result by executing a second neural network model with a current rendered image, the current filter kernel, and the image warping result (Munkberg, Para. 
[0031], by applying the warped external recurrent neural network (i.e., second neural network model) over an image sequence, one image at a time, the neural network model outputs a sequence of temporally-stable reconstructed images, one image at a time. Para. [0037], The second input data frame is processed, based on the warped external state, using the layers of the neural network model to produce a reconstructed second data frame that approximates the second input data frame without artifacts).

Although Munkberg teaches a previous filter kernel from time t-1 and a second filter kernel for time t (Munkberg, Para. [0040]), Munkberg does not explicitly teach "estimating a current filter kernel by warping the previous filter kernel using the change-data". However, in an analogous field of endeavor, Ma teaches performing data-dependent kernel warping for modeling invariance, wherein the new kernel "warps" the original kernel by accounting for the invariances (Ma, Section 3). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Munkberg with the teachings of Ma by including performing warping on the previous filter kernel to create the current filter kernel. One having ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to combine these references because doing so would allow for a learning algorithm for invariances that retains computational and spatial efficiency, as recognized by Ma. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date. 
Regarding claim 2, Munkberg in view of Ma teaches the image reconstruction method of claim 1, and further teaches wherein the determining of the image warping result comprises: determining a first image warping result by warping a second previous reconstruction result reconstructed from a second previous rendered image using first change-data comprising a difference between a first previous rendered image and the second previous rendered image (Munkberg, Para. [0037], the external state is warped by the temporal warp function 115, using difference data corresponding to changes between the first input data frame and the second input data frame (e.g., optical flow, motion vectors, or the like), to produce warped external state); and determining a current image warping result by warping a first previous reconstruction result reconstructed from the first previous rendered image using the current change-data corresponding to a difference between the current rendered image and the first previous rendered image (Munkberg, Para. [0037], the external state is warped by the temporal warp function 115, using difference data corresponding to changes between the first input data frame and the second input data frame (e.g., optical flow, motion vectors, or the like), to produce warped external state. Warping the external state anchors individual characteristics or features to regions within the data frames. The warped external state enables improved tracking over time by integrating information associated with changing features over multiple frames in a sequence, producing more temporally stable and higher quality reconstructed data), wherein the first previous rendered image corresponds to a previous frame of the current rendered image and the second previous rendered image corresponds to a previous frame of the first previous rendered image (Munkberg, Para. [0035], a sequence of input data including artifacts is received by the warped external recurrent neural network. 
The sequence includes a first input data frame and a second input data frame. Para. [0031], by applying the warped external recurrent neural network (i.e., second neural network model) over an image sequence, one image at a time, the neural network model outputs a sequence of temporally-stable reconstructed images, one image at a time. The external state carries information about one or more previous images). Examiner interprets the current rendered image, first previous rendered image, and second previous rendered image to mean rendered image frames at time t, t-1, and t-2. Because the Munkberg reference teaches the warped external recurrent neural network over an image sequence one image at a time, Examiner submits that the warping is performed multiple times between the first and second previous rendered images and then between the first previous rendered image and current rendered image. If applicant would like the claim to be interpreted differently, further clarification through amendment is required.

Regarding claim 3, Munkberg in view of Ma teaches the image reconstruction method of claim 2, and further teaches wherein the determining of the previous filter kernel comprises executing the first neural network model with the previous rendered image and the first image warping result (Munkberg, Para. [0040], the warped external state and the processed second input data frame are processed by the encoder/decoder neural network model 110 to generate spatially-varying filter kernels. When the sequence of input data is an image sequence, the combiner function 120 applies a first filter kernel to pixels of the reconstructed first data frame (from time t−1)). 
Regarding claim 4, Munkberg in view of Ma teaches the image reconstruction method of claim 2, and further teaches wherein the determining of the current reconstruction result comprises executing the second neural network model with the current rendered image, the current filter kernel, and the current image warping result (Munkberg, Para. [0031], by applying the warped external recurrent neural network (i.e., second neural network model) over an image sequence, one image at a time, the neural network model outputs a sequence of temporally-stable reconstructed images, one image at a time. Para. [0037], The second input data frame is processed, based on the warped external state, using the layers of the neural network model to produce a reconstructed second data frame that approximates the second input data frame without artifacts).

Regarding claim 12, Munkberg in view of Ma teaches the image reconstruction method of claim 1, and further teaches wherein the first neural network model comprises an auto-encoder model comprising an encoding block and a decoding block (Munkberg, Paras. [0049]-[0050], a convolutional autoencoder may be used as a starting point to develop the encoder/decoder neural network model. The autoencoder includes a first encoder block and a succession of decoder stages).

Regarding claim 13, Munkberg in view of Ma teaches the image reconstruction method of claim 12, wherein the decoding block comprises a convolutional recurrent layer configured to determine a current feature based on a previous feature (Munkberg, Para. [0055], Instead of receiving hidden state generated by the first convolutional layer during processing of a previous frame, the first convolutional layer receives the warped external state generated by the encoder/decoder neural network model during processing of a previous frame. The encoder/decoder neural network model functions as a single recurrent layer). 
Regarding claim 14, the image reconstruction method of claim 13, wherein the convolutional recurrent layer warps the previous feature using the change-data and determines the current feature based on a warping result (Ma, Section 3, performing data-dependent kernel warping for modeling invariance, wherein the new kernel "warps" the original kernel by accounting for the invariances). The proposed combination as well as the motivation for combining the Munkberg and Ma references presented in the rejection of Claim 1, apply to Claim 14 and are incorporated herein by reference. Thus, the method recited in Claim 14 is met by Munkberg in view of Ma.

Claim 15 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Munkberg and Ma references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Munkberg and Ma references discloses a computer-readable storage medium (Munkberg, Para. [0153], the memory, the storage, and/or any other storage are possible examples of computer-readable media).

Claims 5-8, 16-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Munkberg et al. (US 2020/0126191 A1) in view of Ma et al. ("Learning Invariant Representations with Kernel Warping"), as applied to claims 1-4 and 12-15 above, and further in view of Chen et al. (US 2021/0133434 A1).

Regarding claim 5, Munkberg in view of Ma teaches the image reconstruction method of claim 1, as described above. Although Munkberg in view of Ma teaches the method may be executed by a GPU, CPU, or any processor capable of implementing the neural network (Munkberg, Para. 
[0034]), they do not explicitly teach "the determining of the previous filter kernel by executing the first neural network model is performed by a first processing unit" and "the determining of the current reconstruction result by executing the second neural network model is performed by a second processing unit". However, in an analogous field of endeavor, Chen teaches a first neural network run by the first neural network processor and the second neural network run by the second neural network processor (Chen, Para. [0108]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Munkberg in view of Ma with the teachings of Chen by including executing the first neural network for determining the previous filter kernel on a first processing unit and executing the second neural network for determining the current reconstruction result using a second processing unit. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for performing execution of two neural network models using separate processing units, as recognized by Chen. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 6, Munkberg in view of Ma further in view of Chen teaches the image reconstruction method of claim 5, and further teaches wherein the determining of the image warping result and the estimating of the current filter kernel are further performed by the first processing unit (Munkberg, Para. [0034], the method may be executed by a GPU, CPU, or any processor capable of implementing the warped external recurrent neural network). 
Regarding claim 7, Munkberg in view of Ma further in view of Chen teaches the image reconstruction method of claim 5, and further teaches wherein the first processing unit is configured to determine the previous filter kernel by executing the first neural network model independent of whether the current rendered image is generated (Munkberg, Para. [0040], the combiner function applies a first filter kernel (i.e., previous filter kernel) to pixels of the reconstructed first data frame (from time t-1) (i.e., independent of the current rendered image)).

Regarding claim 8, Munkberg in view of Ma further in view of Chen teaches the image reconstruction method of claim 5, and further teaches wherein the previous rendered image comprises a first previous rendered image corresponding to a previous frame of the current rendered image or a second previous rendered image corresponding to a previous frame of the first previous rendered image (Munkberg, Para. [0077], the sequence includes a previous rendered image frame and the rendered image frame), the previous reconstruction result comprises a first previous reconstruction result reconstructed from the first previous rendered image or a second previous reconstruction result reconstructed from the second previous rendered image (Munkberg, Para. [0078], a reconstructed first rendered image frame that approximates the first rendered image frame without artifacts), and the first processing unit is configured to determine the previous filter kernel by executing the first neural network model based on the previous rendered image and the second previous reconstruction result, independent of whether the current rendered image and the first previous reconstruction result are generated (Munkberg, Para. 
[0040], the combiner function applies a first filter kernel (i.e., previous filter kernel) to pixels of the reconstructed first data frame (i.e., second previous reconstruction result) (i.e., independent of the current rendered image and the first previous reconstruction result)).

Regarding claim 16, Munkberg teaches an image processing apparatus comprising: a first image processing unit configured to warp a previous reconstruction result using change-data comprising a difference between rendered images, and configured to determine an image warping result, wherein the change-data comprises a current change-data comprising a difference between a current rendered image and a previous rendered image (Munkberg, Para. [0028], the temporal warp function warps the external state based on the per-datum differences to produce the warped external state for time t-1. The warping aligns the external state from time t-1 to the input data at time t. Para. [0035], in an embodiment, the input data comprises rendered image frames); and a second processing unit configured to determine a previous filter kernel by executing a first neural network model with the previous rendered image and the image warping result (Munkberg, Para. [0040], The warped external state and the processed second input data frame are processed by the encoder/decoder neural network model (i.e., first neural network model) to generate spatially-varying filter kernels. When the sequence of input data is an image sequence, the combiner function applies a first filter kernel to pixels of the reconstructed first data frame (from time t-1) and applies a second filter kernel to pixels of the processed second input data frame (from time t)), wherein the first processing unit is configured to determine a current reconstruction result by executing a second neural network model with a current rendered image, the current filter kernel, and the image warping result (Munkberg, Para. 
[0031], by applying the warped external recurrent neural network (i.e., second neural network model) over an image sequence, one image at a time, the neural network model outputs a sequence of temporally-stable reconstructed images, one image at a time. Para. [0037], The second input data frame is processed, based on the warped external state, using the layers of the neural network model to produce a reconstructed second data frame that approximates the second input data frame without artifacts).

Although Munkberg teaches a previous filter kernel from time t-1 and a second filter kernel for time t (Munkberg, Para. [0040]), Munkberg does not explicitly teach the processing unit is "configured to estimate a current filter kernel by warping the previous filter kernel using the current change-data". However, in an analogous field of endeavor, Ma teaches performing data-dependent kernel warping for modeling invariance, wherein the new kernel "warps" the original kernel by accounting for the invariances (Ma, Section 3). The proposed combination as well as the motivation for combining the Munkberg and Ma references presented in the rejection of Claim 1, apply to Claim 16 and are incorporated herein by reference.

Although Munkberg in view of Ma teaches the method may be executed by a GPU, CPU, or any processor capable of implementing the neural network (Munkberg, Para. [0034]), they do not explicitly teach "the determining of the previous filter kernel by executing the first neural network model is performed by a first processing unit" and "the determining of the current reconstruction result by executing the second neural network model is performed by a second processing unit". However, in an analogous field of endeavor, Chen teaches a first neural network run by the first neural network processor and the second neural network run by the second neural network processor (Chen, Para. [0108]). 
The proposed combination as well as the motivation for combining the Munkberg, Ma, and Chen references presented in the rejection of Claim 5, apply to Claim 16 and are incorporated herein by reference. Thus, the system recited in Claim 16 is met by Munkberg in view of Ma further in view of Chen.

Regarding claim 17, Munkberg in view of Ma further in view of Chen teaches the image processing apparatus of claim 16, and further teaches wherein the previous rendered image comprises a first previous rendered image corresponding to a previous frame of the current rendered image or a second previous rendered image corresponding to a previous frame of the first previous rendered image (Munkberg, Para. [0077], the sequence includes a previous rendered image frame and the rendered image frame), the previous reconstruction result comprises a first previous reconstruction result reconstructed from the first previous rendered image or a second previous reconstruction result reconstructed from the second previous rendered image (Munkberg, Para. [0078], a reconstructed first rendered image frame that approximates the first rendered image frame without artifacts), and the first processing unit is configured to determine the previous filter kernel by executing the first neural network model based on the previous rendered image and the second previous reconstruction result, independent of whether the current rendered image and the first previous reconstruction result are generated (Munkberg, Para. [0040], the combiner function applies a first filter kernel (i.e., previous filter kernel) to pixels of the reconstructed first data frame (i.e., second previous reconstruction result) (i.e., independent of the current rendered image and the first previous reconstruction result)). 
Regarding claim 19, Munkberg in view of Ma further in view of Chen teaches the image processing apparatus of claim 16, and further teaches wherein the first neural network model corresponds to an auto-encoder model comprising an encoding block and a decoding block (Munkberg, Paras. [0049]-[0050], a convolutional autoencoder may be used as a starting point to develop the encoder/decoder neural network model. The autoencoder includes a first encoder block and a succession of decoder stages), and the decoding block comprises a convolutional recurrent layer configured to determine a current feature based on a previous feature (Munkberg, Para. [0055], Instead of receiving hidden state generated by the first convolutional layer during processing of a previous frame, the first convolutional layer receives the warped external state generated by the encoder/decoder neural network model during processing of a previous frame. The encoder/decoder neural network model functions as a single recurrent layer).

Regarding claim 20, Munkberg in view of Ma further in view of Chen teaches the image processing apparatus of claim 19, and further teaches wherein the convolutional recurrent layer is configured to warp the previous feature using the change-data, and is configured to determine the current feature based on a warping result (Ma, Section 3, performing data-dependent kernel warping for modeling invariance, wherein the new kernel "warps" the original kernel by accounting for the invariances). The proposed combination as well as the motivation for combining the Munkberg, Ma, and Chen references presented in the rejection of Claim 5, apply to Claim 20 and are incorporated herein by reference. Thus, the system recited in Claim 20 is met by Munkberg in view of Ma further in view of Chen. 
Allowable Subject Matter

Claims 9-11 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. For Examiner's statement of reasons for allowance, see the Non-Final Office Action mailed May 9, 2025.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday, 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Jan 27, 2023 — Application Filed
May 05, 2025 — Non-Final Rejection (§103)
Aug 01, 2025 — Response Filed
Aug 26, 2025 — Final Rejection (§103)
Sep 08, 2025 — Interview Requested
Sep 18, 2025 — Examiner Interview Summary
Sep 18, 2025 — Applicant Interview (Telephonic)
Oct 30, 2025 — Response after Non-Final Action
Dec 02, 2025 — Request for Continued Examination
Dec 17, 2025 — Response after Non-Final Action
Jan 26, 2026 — Non-Final Rejection (§103)
Mar 24, 2026 — Applicant Interview (Telephonic)
Mar 24, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236
FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING
2y 5m to grant • Granted Apr 07, 2026
Patent 12597129
METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES
2y 5m to grant • Granted Apr 07, 2026
Patent 12597093
UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME
2y 5m to grant • Granted Apr 07, 2026
Patent 12597124
DEBRIS DETERMINATION METHOD
2y 5m to grant • Granted Apr 07, 2026
Patent 12588885
FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 99% (+47.0%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
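The "With Interview" projection appears to follow from adding the interview lift to the base grant probability and capping the result below certainty. A minimal sketch of that derivation; the 99% cap is an assumption, inferred from the page showing 99% rather than the raw 53% + 47% = 100%:

```python
# Hypothetical derivation of the "With Interview" projection:
# base grant probability plus interview lift, capped below 100%.

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Projected grant probability when an interview is held (cap assumed)."""
    return min(base_pct + lift_pct, cap)

print(with_interview(53.0, 47.0))  # 99.0 — the cap binds here
```

Capping avoids displaying a 100% certainty that no historical sample can justify; the exact cap value used by the dashboard is not stated on the page.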
