Prosecution Insights
Last updated: April 19, 2026
Application No. 18/406,902

METHODS AND PROCESSORS FOR EXECUTING ADAPTIVE FRAME GENERATION

Status: Non-Final OA (§103)
Filed: Jan 08, 2024
Examiner: VAUGHN, ALEXANDER JOSEPH
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (11 granted / 15 resolved), +11.3% vs. Tech Center average (above average)
Interview Lift: +28.6% among resolved cases with interview
Average Prosecution: 2y 10m, with 20 applications currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 30.0% (-10.0% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 15 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 9-11, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa et al. (US 20100302438 A1), hereinafter Fujisawa, in view of Luo et al. (US 20250106355 A1), hereinafter Luo.

Regarding claim 1, Fujisawa teaches A method for providing a current frame in a sequence of frames, (Para. 18 and Fig. 9 "is an exemplary diagram illustrating a method for calculating motion vectors." Para. 27 see "a motion estimation module configured to calculate motion vectors for respective pixel blocks in each frame in moving image data by estimating motion between frames in the moving image data."). the method executable in real-time by a processor, (Para. 136 see "The CPU 71 is a processor configured to execute various programs. The CPU 71 executes various arithmetic processes and controls appropriate modules in the information processing apparatus 70.").

the method comprising: at a current moment in time: determining a plurality of motion vectors for a current frame, the current frame including a plurality of pixels, a given motion vector from the plurality of motion vectors being a displacement of a given pixel between the current frame and an immediate predecessor frame; (Para. 30 and Fig. 9 "The motion vector estimation module 101 calculate motion vectors for respective pixel blocks in each frame (hereinafter also referred to as an image frame) in moving image data by estimating the motion between frames in the moving image data. Specifically, the motion vector estimation module 101 estimates the motion between an input image frame (target image frame) 52 in the moving image data and a preceding image frame 51 of the input image frame 52, and then calculates motion vectors 502 corresponding to the input image frame 52, in pixel block units.").

determining a motion vector metric based on the plurality of motion vectors, the motion vector metric being indicative of an extent of change between the current frame and the immediate predecessor frame; (Para. 72 see "Thus, under the second condition, for example, if the difference between the motion vectors 502 of the input image frame 52 and the motion vectors 501 of the preceding image frame 51 is equal to or more than a second threshold, the input image frame 52 is determined as a redundant frame. Here, the difference between the motion vectors 502 of the input image frame 52 and the motion vectors 501 of the preceding image frame 51 indicates, for example, a sum of absolute difference calculated from the motion vectors." Paras. 73-77 further disclose how the difference metric is calculated.).

selectively triggering, (see Figs. 1 and 12 (specifically, the switching module 105). Para. 30 see "the motion vector estimation module 101 outputs the input image frame 52 and the motion vectors 502, to the switching module 105." Para. 31 see "The redundant frame determination module 102 outputs a determination result indicating whether the input image frame 52 is a redundant frame, to the switching module 105." Para. 32 see "The switching module 105 switches the succeeding process based on the determination result indicating whether the input image frame 52 is a redundant frame." Para. 33 see "If the input image frame 52 is determined to be a redundant frame, the switching module 105 discards information on the input image frame 52 and the motion vectors 502 of the input image frame 52. The switching module 105 avoids outputting the input frame 52 as output moving image data for the frame rate conversion apparatus 10." Para. 34 see "If the input image frame 52 is determined to be a non-redundant frame, the switching module 105 outputs the input image frame 52 and the motion vectors 502 of the input image frame 52, to the interpolation frame generation module 103.").

based on the motion vector metric, one of: copying the immediate predecessor frame from the sequence of frames as the current frame; (Para. 31 see "The redundant frame determination module 102 determines whether the input image frame 52 is a redundant frame based on the motion vectors 502 of the input image frame 52 and motion vectors 501 of the preceding image frame 51. The motion vectors 501 of the preceding image frame 51 are stored by the data storing module 104 described below. Furthermore, the redundant frame is a frame generated by, for example, copying the preceding frame. The redundant frame determination module 102 outputs a determination result indicating whether the input image frame 52 is a redundant frame, to the switching module 105.").

Fujisawa does not teach generating the current frame using a Neural Network (NN) based on the immediate predecessor frame; and rendering the current frame using a Graphical Processing Unit (GPU).

However, Luo teaches generating the current frame using a Neural Network (NN) based on the immediate predecessor frame; (Para. 669 see "a neural network 6006 applies sets of motion vectors from motion vectors 6004 to frames of output frames 6008 to generate subsequent frames of output frames 6008. In at least one embodiment, a neural network 6006 utilizes motion vectors 6004 as part of one or more temporal feedback processes that apply motion vectors to output frames to generate subsequent output frames."). and rendering the current frame using a Graphical Processing Unit (GPU). (Para. 672 see "a neural network 6006 is trained by one or more systems that cause neural network 6006 to obtain a frame of input frames 6002 and perform one or more neural network image processing/generation/rendering operations (e.g., generate new pixels, modify existing pixels) to generate an output frame of output frames 6008." Paras. 101-102 disclose rendering frames with the GPU.).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fujisawa to incorporate the teachings of Luo to generate a current frame using a neural network based on the predecessor frame and to render the generated frame with a GPU. Doing so would predictably increase speed and performance of a game engine or graphics engine by predicting the current frame using a neural network to reduce the number of calculations required to generate the frame. Additionally, rendering the frame with a GPU would predictably increase the speed and performance of a game engine or graphics engine by using dedicated hardware designed to speed up calculations.

Regarding claim 2, Fujisawa in view of Luo teaches The method of claim 1. In addition, Fujisawa teaches wherein the determining the motion vector metric includes: determining a magnitude of each one of the plurality of motion vectors; (Para. 69 see "the number of those of the pixel blocks set in the input image frame 52 for which the motion vector is a zero vector is equal to or more than a first threshold." (Examiner note: Determining if a vector is zero requires determining its magnitude.)). determining a magnitude of a gradient of motion vectors between the current frame and the immediate predecessor frame; (Para. 72 see "if the difference between the motion vectors 502 of the input image frame 52 and the motion vectors 501 of the preceding image frame 51 is equal to or more than a second threshold… indicates, for example, a sum of absolute difference calculated from the motion vectors." Para. 74 see "the absolute difference between the motion vectors mv.sub.a and mv.sub.b is calculated as follows. |mv.sub.ax-mv.sub.bx|+|mv.sub.ay-mv.sub.by| The sum of absolute difference is calculated by summing the calculated absolute differences for the respective pixel blocks."). determining the motion vector metric based on a maximum amongst the magnitudes of each of the plurality of motion vectors and the magnitude of the gradient of the motion vectors. (Para. 80 see "The redundant frame determination module 102 determines the input image frame 52 meeting the above-described two conditions as a redundant frame. Furthermore, the redundant frame determination module 102 determines that the input image frame 52 meeting one of the above-described two conditions or the input image frame 52 meeting neither of the above-described two conditions are not a redundant frame, that is, a non-redundant frame." (Examiner note: If either threshold is sufficient to trigger the non-redundant classification, the method is effectively selecting the maximum.)).

Regarding claim 3, Fujisawa in view of Luo teaches The method of claim 1. In addition, Fujisawa teaches wherein the method further includes comparing the motion vector metric against one or more thresholds, (Para. 69 see "the number of those of the pixel blocks set in the input image frame 52 for which the motion vector is a zero vector is equal to or more than a first threshold."). and wherein the selectively triggering is executed based on the comparison between the motion vector metric and the one or more thresholds. (Para. 32 see "The switching module 105 switches the succeeding process based on the determination result indicating whether the input image frame 52 is a redundant frame.").

Regarding claim 5, Fujisawa in view of Luo teaches The method of claim 3. In addition, Fujisawa teaches wherein the method further includes: in response to generating the current frame, adjusting a given threshold amongst the one or more thresholds. (Para. 79 see "The second threshold value may be calculated by multiplying the number of pixel blocks by a predetermined threshold in pixel block unit, or by multiplying a difference calculated in the preceding redundant-frame determination process by a predetermined rate, or may be a constant.").

Claim 9 is rejected under the same analysis as claim 1 above. Claim 10 is rejected under the same analysis as claim 2 above. Claim 11 is rejected under the same analysis as claim 3 above. Claim 13 is rejected under the same analysis as claim 5 above.

Claims 8, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa et al. (US 20100302438 A1), hereinafter Fujisawa, in view of Luo et al. (US 20250106355 A1), hereinafter Luo, and Hu et al.: "A Dynamic Multi-Scale Voxel Flow Network for Video Prediction", Arxiv.org Cornell University Library, submitted 17 March 2023, [retrieved on 1-27-2026]. Retrieved from the internet <https://arxiv.org/abs/2303.09875>, hereinafter Hu.

Regarding claim 8, Fujisawa in view of Luo teaches The method of claim 1. Fujisawa does not teach wherein the NN is a Dynamic Multiscale Voxel Flow Network (DMVFN) executable by the processor.

However, Hu teaches wherein the NN is a Dynamic Multiscale Voxel Flow Network (DMVFN) executable by the processor. (Pg. 1, Col. 1, Para. 1 see "Video prediction aims to predict future video frames from the current ones." Pg. 2, Col. 1, Para. 2 see "We design a light-weight DMVFN to accurately predict future frames with only RGB frames as inputs."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fujisawa and Luo to incorporate the teachings of Hu to use a DMVFN as the neural network to generate the current frame. Doing so would predictably achieve better frame prediction at lower computational costs by using a neural network designed to effectively perceive the motion scales of video frames.

Claim 16 is rejected under the same analysis as claim 8 above.

Allowable Subject Matter

Claim(s) 4, 6-7, 12, 14-15 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jiang et al. (US 20190138889 A1) discloses a method for predicting video frames using video interpolation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J VAUGHN whose telephone number is (571) 272-5253. The examiner can normally be reached M-F 8:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANDREW MOYER, can be reached at (571) 272-9523.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER JOSEPH VAUGHN/ Examiner, Art Unit 2675
/EDWARD PARK/ Primary Examiner, Art Unit 2675
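The redundancy test the examiner cites from Fujisawa (Paras. 69, 72-74, 80) and the claim-1 selective trigger can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the application or the cited references; all function names and threshold values are hypothetical, and the motion vectors are assumed to be per-block integer (x, y) pairs.

```python
def motion_vector_sad(current_mvs, preceding_mvs):
    """Para. 74: sum over pixel blocks of |mv_ax - mv_bx| + |mv_ay - mv_by|."""
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in zip(current_mvs, preceding_mvs))

def is_redundant(current_mvs, preceding_mvs, first_threshold, second_threshold):
    """Para. 80: the frame is redundant only when BOTH conditions hold."""
    # First condition (Para. 69): enough zero-motion-vector blocks.
    zero_blocks = sum(1 for mv in current_mvs if mv == (0, 0))
    # Second condition (Para. 72): the SAD against the preceding frame's
    # motion vectors meets the second threshold.
    sad = motion_vector_sad(current_mvs, preceding_mvs)
    return zero_blocks >= first_threshold and sad >= second_threshold

def provide_current_frame(prev_frame, current_mvs, preceding_mvs,
                          generate_with_nn, first_threshold, second_threshold):
    """Claim-1-style selective trigger: copy the predecessor frame when the
    current frame is deemed redundant; otherwise generate it (per Luo,
    e.g. with a neural network) before GPU rendering."""
    if is_redundant(current_mvs, preceding_mvs, first_threshold, second_threshold):
        return prev_frame
    return generate_with_nn(prev_frame)
```

For example, `motion_vector_sad([(1, 2), (3, 4)], [(0, 0), (0, 0)])` evaluates to 10, and a frame whose blocks are mostly zero vectors with a near-zero SAD would be copied rather than regenerated.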

Prosecution Timeline

Jan 08, 2024: Application Filed
Jan 27, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591955: SYSTEMS AND METHODS FOR GENERATING DYNAMIC DARK CURRENT IMAGES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12579756: GRAPHICAL ASSISTANCE WITH TASKS USING AN AR WEARABLE DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12573010: IMAGE PROCESSING APPARATUS, RADIATION IMAGING SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567265: VEHICLE, CONTROL METHOD THEREOF AND CAMERA MONITORING APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12521061: Method of Determining the Effectiveness of a Treatment on a Face
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 99% (+28.6%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
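Since the grant probability is stated to be derived from the career allow rate, the arithmetic behind the headline figures can be checked directly. This is a minimal sketch assuming the figure is simply granted/resolved; the tool's actual formulas (including how the interview lift is computed) are not disclosed.

```python
# Assumed derivation of the headline figures from the examiner stats above.
granted, resolved = 11, 15
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # 73% career allow rate = stated grant probability

# The "+11.3% vs TC avg" delta then implies a Tech Center average of roughly:
tc_average = allow_rate * 100 - 11.3
print(f"{tc_average:.1f}%")  # ~62.0%
```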
