Prosecution Insights
Last updated: April 19, 2026
Application No. 18/225,674

TEMPORALLY STABLE DATA RECONSTRUCTION WITH AN EXTERNAL RECURRENT NEURAL NETWORK

Non-Final OA — §102, §103
Filed
Jul 24, 2023
Examiner
ALGHAZZY, SHAMCY
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
Nvidia Corporation
OA Round
3 (Non-Final)
48%
Grant Probability
Moderate
3-4
OA Rounds
3y 11m
To Grant
49%
With Interview

Examiner Intelligence

Grants 48% of resolved cases
48%
Career Allow Rate
30 granted / 62 resolved
-6.6% vs TC avg
Minimal lift (+0.7%)
Interview Lift
Based on resolved cases with interview
Typical timeline
3y 11m
Avg Prosecution
25 currently pending
Career history
87
Total Applications
across all art units

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 62 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are cancelled; claims 21-41 are pending and are being examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 08/19/2025 have been entered.

Information Disclosure Statement

The information disclosure statements (IDSs) were submitted on 08/21/2025 and 09/25/2025. The submissions were in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

Applicant's arguments regarding the 35 USC § 103 rejection of claims 21-41 (see REMARKS, pages 7-8, filed 08/19/2025) have been considered, but they are moot in light of the new rejection below. Furthermore, Caballero teaches reconstructing a second frame (up-sampled) from a first frame (down-sampled) and modifying the second reconstructed (up-sampled) frame based on the motion of pixels between frames (Col. 20, Lines 9-13; Col. 5, Lines 12-15). Furthermore, the applicant cites and argues the amendment of modifying the input second frame to a modified generated version of an input first frame. However, this amendment appears to have been cancelled in the amended independent claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21-22, 24, 27-29, 31, 34-36, and 38 are rejected under 35 U.S.C. 102 as being anticipated by Caballero (US10701394).

Regarding claim 21, Caballero teaches circuitry to cause one or more neural networks to generate a modified version of a frame based at least on generating an input second frame using the one or more neural networks based at least on an input first frame, and modifying the input second frame based at least on motion of one or more pixels between the input first frame and the input second frame ([Col. 17, Lines 30-41]: At step 160 (step 250), the reconstruction model is applied to each of the frames to output higher-resolution frames depending on the embodiment. The reconstruction, or decoding, process in most embodiments involves applying the optimized super resolution convolutional neural network model, or reconstruction model, for each scene in order to restore the lower-resolution video to its original resolution having substantially the same quality as the original high resolution video. Given the corresponding models for each lower-resolution frame, the original higher-resolution frames can be reconstructed with high accuracy for at least some of these embodiments.)
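Editor's note: the sub-pixel convolution that Caballero's reconstruction model relies on ends with a channel-to-space rearrangement, where each group of r*r output channels becomes an r x r block of high-resolution pixels. A minimal NumPy sketch of that rearrangement step follows; the shapes, names, and example values are illustrative assumptions, not drawn from the patent.

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r).

    This is the sub-pixel upscaling step: each group of r*r channels
    produced by the final convolution is scattered into an r x r block
    of high-resolution pixels.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    assert c * r * r == c_r2, "channel count must be divisible by r*r"
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# Four 2x2 channel maps (C=1, r=2) become one 4x4 high-resolution map.
lr = np.arange(16).reshape(4, 2, 2)  # (C*r*r, H, W)
hr = pixel_shuffle(lr, 2)
print(hr.shape)  # (1, 4, 4)
```

In a full super-resolution network this rearrangement would follow the last convolution layer, so all upscaling arithmetic happens in low-resolution space.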
The examiner notes that Caballero teaches reconstructing a second frame (up-sampled) from a first frame (down-sampled) and modifying the second reconstructed (up-sampled) frame based on the motion of pixels between frames, and that the efficiency of sub-pixel convolution can be combined with the performance of spatio-temporal networks and motion compensation to obtain a fast and accurate video super-resolution algorithm (Col. 20, Lines 9-13; Col. 5, Lines 12-15).

[Image: media_image1.png (grayscale)]

Regarding claim 22, Caballero teaches the one or more processors of claim 21, wherein the one or more neural networks comprise an encoder/decoder neural network ([Fig. 12 and Fig. 14]).

[Image: media_image2.png (grayscale)]
[Image: media_image3.png (grayscale)]

The examiner notes that Caballero teaches a neural network that comprises an encoder (Fig. 12) and a decoder (Fig. 14).

Regarding claim 24, Caballero teaches the one or more processors of claim 21, wherein the one or more neural networks are to apply at least a first portion of at least one filter kernel to the input first frame, wherein the input first frame comprises reconstructed data, and to apply at least a second portion of the at least one filter kernel to the input second frame ([Col. 8-9, Lines 61-5]: Implementations can include one or more of the following features. For example, the sub-pixel convolutional neural network can use a spatio-temporal model, the spatio-temporal model can include at least one input filter with a temporal depth that matches a number of frames selected from the plurality of low-resolution frames, and the at least one input filter can collapse temporal information in a first layer. The sub-pixel convolutional neural network can use a spatio-temporal model, the spatio-temporal model can include at least one layer, and a first layer of the at least one layer can merge frames in groups smaller than an input number of frames.)

The examiner notes that Caballero teaches applying a filter kernel during image enhancement to at least two input frames, one of which is modified ([Page 22, Fig. 20]).

Regarding claim 27, Caballero teaches wherein the input first frame and the input second frame are successive video frames ([Col. 21, Lines 7-10]: By scene, it is meant a consecutive group or sequence of frames, which at the coarsest level can be the entire video or, at the most granular level, can be a single frame.)

Claims 28, 29, 31, and 34 are rejected based upon the same rationale as the rejection of claims 21, 22, 24, and 27 since they are the system claims corresponding to the processor claims. Claims 35, 37, and 38 are rejected based upon the same rationale as the rejection of claims 21, 22, and 24 since they are the method claims corresponding to the processor claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23, 30, and 36 are rejected under 35 U.S.C.
103 as being unpatentable over Caballero (US10701394) in view of Wang (US 2019/0379883 A1).

Regarding claim 23, Caballero teaches the processor of claim 21. However, Caballero is not relied upon to explicitly teach that the one or more neural networks are to combine at least one filter kernel and at least two successive frames. On the other hand, Wang teaches the one or more neural networks are to combine at least one filter kernel and at least two successive frames ([0011]: FIG. 2 shows the 3D convolution, wherein every pixel value outputted by the 3D convolution layers is obtained through convolving pixel values in a 3x3 region corresponding to adjacent three frames and a convolution filter.)

[Image: media_image4.png (grayscale)]

The examiner notes that Wang teaches a CNN that applies a convolution kernel to two adjacent frames and modifies pixels in one frame based on the motion of pixels between that frame and another adjacent frame. The examiner further notes that Caballero and Wang are considered to be analogous because they are in the same field of convolutional neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Caballero's convolutional neural network to incorporate the one or more neural networks combining at least one filter kernel and at least two successive frames, as taught by Wang [0011], to automatically complete invisible voids in the right eye views caused by occlusion or local displacement from left and right eye disparity ([0011]).

Claim 30 is rejected based upon the same rationale as the rejection of claim 23 since it is the system claim corresponding to the processor claim.

Regarding claim 36, Caballero teaches the processor of claim 35. However, Caballero is not relied upon to explicitly teach that the one or more neural networks are to combine at least one filter kernel and at least two successive frames, wherein the filter kernel is applied in connection with modifying the input second frame based on pixel motion between the input first frame and the input second frame. On the other hand, Wang teaches the one or more neural networks are to combine at least one filter kernel and at least two successive frames, wherein the filter kernel is applied in connection with modifying the input second frame based on pixel motion between the input first frame and the input second frame ([0011]: FIG. 2 shows the 3D convolution, wherein every pixel value outputted by the 3D convolution layers is obtained through convolving pixel values in a 3x3 region corresponding to adjacent three frames and a convolution filter.)

The examiner notes that Wang teaches a CNN that applies a convolution kernel to two adjacent frames and modifies pixels in one frame based on the motion of pixels between that frame and another adjacent frame. The examiner further notes that Caballero and Wang are considered to be analogous because they are in the same field of convolutional neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Caballero's convolutional neural network to incorporate the one or more neural networks combining at least one filter kernel and at least two successive frames, wherein the filter kernel is applied in connection with modifying the input second frame based on pixel motion between the input first frame and the input second frame, as taught by Wang [0011], to automatically complete invisible voids in the right eye views caused by occlusion or local displacement from left and right eye disparity ([0011]).
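Editor's note: the 3D convolution cited from Wang mixes a small spatial region across several adjacent frames in a single filter application, which is how temporal information enters each output pixel. A naive NumPy sketch follows; the frame count, kernel size, and names are the editor's illustrative assumptions, not Wang's implementation.

```python
import numpy as np

def conv3d_valid(frames: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 3D convolution over a stack of video frames.

    frames: (T, H, W) stack of grayscale frames.
    kernel: (kt, kh, kw) filter spanning kt adjacent frames.
    Each output value sums a kh x kw spatial patch taken from kt
    neighbouring frames, weighted by the kernel.
    """
    t, h, w = frames.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(frames[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Three 4x4 frames and one 3x3x3 averaging kernel -> a 1x2x2 output.
frames = np.ones((3, 4, 4))
kernel = np.full((3, 3, 3), 1 / 27)  # average over a 3x3 patch in 3 frames
out = conv3d_valid(frames, kernel)
print(out.shape)  # (1, 2, 2)
```

A 2D convolution applied per frame would leave the temporal axis untouched; collapsing it inside the kernel, as above, is what distinguishes the spatio-temporal approach discussed in these rejections.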
Claims 25, 32, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Caballero (US10701394) in view of Giera (US 2018/0311663 A1).

Regarding claim 25, Caballero teaches the processor of claim 21. However, Caballero is not relied upon to explicitly teach that the one or more neural networks comprise two or more filter kernels to be applied to different respective areas of at least one of the input first frame or the input second frame. On the other hand, Giera teaches the one or more neural networks comprise two or more filter kernels to be applied to different respective areas of at least one of the input first frame or the input second frame ([0012]: Each convolution layer convolves small regions of the image using a kernel (or multiple kernels).)

The examiner notes that Caballero and Giera are considered to be analogous because they are in the same field of convolutional neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Caballero's convolutional neural network to incorporate two or more filter kernels applied to different respective areas of at least one of the input first frame or the input second frame, as taught by Giera [0012], to generate activations to be used as the features that are input to the sub-classifier to assign a label to the input image ([0012]).

Claim 32 is rejected based upon the same rationale as the rejection of claim 25 since it is the system claim corresponding to the processor claim. Claim 39 is rejected based upon the same rationale as the rejection of claim 25 since it is the method claim corresponding to the processor claim.

Claims 26, 33, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Caballero (US10701394) in view of BICHLER (US 2021/0232897 A1), further in view of Ali (US 2012/0219236 A1).

Regarding claim 26, Caballero teaches the processor of claim 24. However, Caballero is not relied upon to explicitly teach that the circuitry is to further generate different filter kernels to be used at different respective locations of at least the input first frame. On the other hand, Ali teaches the circuitry is to further generate different filter kernels to be used at different respective locations of at least the input first frame ([0040]: In other words, the transformation produces a variable kernel size for filtering different regions (i.e., pixels) of the image.) The examiner interprets each different-sized kernel to be a different kernel used at a different location of the image. The examiner further notes that Caballero and Ali are considered to be analogous because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Caballero's image processing to incorporate circuitry that further generates different filter kernels to be used at different respective locations of at least the input first frame, as taught by Ali [0040], to determine a blur radius ([0039]).

Claim 33 is rejected based upon the same rationale as the rejection of claim 26 since it is the system claim corresponding to the processor claim. Claim 40 is rejected based upon the same rationale as the rejection of claim 26 since it is the method claim corresponding to the processor claim.

Claim 41 is rejected under 35 U.S.C. 103 as being unpatentable over Caballero (US10701394) in view of Tschemezki (US 2018/0336460 A1).

Regarding claim 41, Caballero teaches the processor of claim 21.
However, Caballero is not relied upon to explicitly teach that the circuitry is further to provide an external recurrent neural network that is separate from a convolutional encoder-decoder network and to maintain temporal state information that provides temporal information for reconstructing one or more subsequent frames. On the other hand, Tschemezki teaches the circuitry is further to provide an external recurrent neural network that is separate from a convolutional encoder-decoder network and to maintain temporal state information that provides temporal information for reconstructing one or more subsequent frames ([0014]: Neural networks, including CNNs (Convolutional Neural Networks) and LSTM (Long Short-Term Memory) networks, can be utilized for prediction. A CNN can incorporate spatially local properties of wildfires. An LSTM network can include the architecture of a CNN and can account for the temporal properties of wildfires and vegetation states.)

The examiner notes that Tschemezki teaches using a system consisting of a CNN for spatial features and an LSTM, which is a type of RNN ([0002]), for temporal properties. The examiner further notes that Caballero and Tschemezki are considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Caballero's machine learning system to incorporate circuitry that further provides an external recurrent neural network separate from a convolutional encoder-decoder network and maintains temporal state information for reconstructing one or more subsequent frames, as taught by Tschemezki [0014], to predict the spatial and temporal properties of wildfires and vegetation states ([0014]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Boulanger-Lewandowski (US 2015/0242180 A1) teaches a method for sound processing using RNNs.
Calle (US 2018/0358003 A1) teaches a method for improving speech quality.
Navarrete (US 2019/0014320 A1) teaches an image encoding/decoding method using convolutional neural networks.
Takagi (US 2012/0051426 A1) teaches a classification method to specify a frame to be subjected to a sharpening or blurring process.
Zhang (US 2017/0345140 A1) teaches a method for generating a simulated image from an input image.
Vogels (US 2018/0293713 A1) teaches a method for applying supervised machine learning using neural networks in denoising images rendered by Monte Carlo path tracing.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAMCY ALGHAZZY, whose telephone number is (571) 272-8824. The examiner can normally be reached M-F, 7:30am-5:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, OMAR FERNANDEZ RIVAS, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAMCY ALGHAZZY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Jul 24, 2023
Application Filed
Sep 07, 2023
Response after Non-Final Action
Oct 19, 2024
Non-Final Rejection — §102, §103
May 01, 2025
Response Filed
May 23, 2025
Final Rejection — §102, §103
Jul 22, 2025
Examiner Interview Summary
Jul 22, 2025
Applicant Interview (Telephonic)
Aug 19, 2025
Request for Continued Examination
Aug 28, 2025
Response after Non-Final Action
Nov 26, 2025
Non-Final Rejection — §102, §103
Jan 27, 2026
Applicant Interview (Telephonic)
Jan 28, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596925
SINGLE-STAGE MODEL TRAINING FOR NEURAL ARCHITECTURE SEARCH
2y 5m to grant Granted Apr 07, 2026
Patent 12596922
ACCELERATING NEURAL NETWORKS IN HARDWARE USING INTERCONNECTED CROSSBARS
2y 5m to grant Granted Apr 07, 2026
Patent 12579408
ADAPTIVELY TRAINING OF NEURAL NETWORKS VIA AN INTELLIGENT LEARNING MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572847
SYSTEMS AND METHODS FOR RESOURCE-AWARE MODEL RECALIBRATION
2y 5m to grant Granted Mar 10, 2026
Patent 12566966
TRAINING ADAPTABLE NEURAL NETWORKS BASED ON EVOLVABILITY SEARCH
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
49%
With Interview (+0.7%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
