Prosecution Insights
Last updated: April 19, 2026
Application No. 18/426,015

Error-Aware Adaptive Video Interpolation

Non-Final OA: §103, §112
Filed: Jan 29, 2024
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved; +14.2% vs TC avg), above average
Interview Lift: strong, +38.5% among resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 27 applications currently pending
Career History: 48 total applications across all art units
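The headline figures above follow directly from the resolved-case counts; a quick arithmetic check (the Tech Center baseline is inferred from the stated +14.2% delta, an assumption rather than a reported value):

```python
granted, resolved = 16, 21                 # resolved cases from the examiner's career data
allow_rate = 100 * granted / resolved
print(f"{allow_rate:.1f}%")                # 76.2%, displayed above as 76%

# The +14.2% delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 14.2
print(f"{tc_avg:.1f}%")                    # 62.0%
```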

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 21 resolved cases.

Office Action

§103, §112
DETAILED ACTION

This action is in response to the application filed on January 29, 2024. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter. The limitations of independent claims 1, 8, and 15 include “based on a comparison of the error estimate and one or more criteria.” The limitation is interpreted as an error estimate being compared to a criterion such as a threshold. The claim language does not indicate what the one or more criteria are; therefore, it is unclear from the current limitations what the “one or more criteria” are based on. The dependent claims do not alleviate the issues of the independent claims and are also rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 20220375030, in view of Staranowicz et al., US 10489897, in view of Liang et al., US 20100033630.

Regarding claim 1, Choi teaches a method for generating an interpolated frame, comprising (see Choi, Paragraph [0002], “a method and apparatus for interpolating an image frame”):

generating, based on a first input frame and a second input frame of an input video, a first feature map and a second feature map (see Choi, Fig. 8 and Paragraph [0137], “In operation S810, an AI-based frame interpolation apparatus (900 of FIG. 9) obtains, from among consecutive frames of an image, first feature maps for a first frame at a plurality of levels and second feature maps for a second frame at a plurality of levels”; consecutive frames of an image are considered to be a video);

generating a forward optical flow map and a backward optical flow map based on the first feature map and the second feature map (see Choi, Fig. 8 and Paragraph [0139], “In operation S820, the AI-based frame interpolation apparatus 900 obtains, via a flow estimation neural network, a first optical flow from a first feature map at a certain level to a second feature map at the certain level and a second optical flow from the second feature map at the certain level to the first feature map at the certain level”; a first optical flow from the first feature map to the second feature map is considered to be a forward optical flow map based on the first feature map, and a second optical flow from the second feature map to the first feature map is considered to be a backward optical flow map based on the second feature map).

Choi does not expressly teach generating an error estimate using the forward optical flow map and backward optical flow map; selecting, based on a comparison of the error estimate and one or more criteria, an interpolation module from a plurality of interpolation modules; or generating the interpolated frame using the selected interpolation module.

However, Staranowicz, in a similar invention in the same field of endeavor, teaches generating an error estimate using the forward optical flow map and backward optical flow map (see Staranowicz, Fig. 2A and Col 6, Lines 47-60, “At operation 104, frame errors for the obtained two or more frames from a video sequence are determined. For example, in some implementations that utilize two frames of data obtained at operation 102, it may be desirable to calculate the forward and backward optical flows for the two frames of data and calculate the estimated frames based on the calculated optical flows. In other words, given two frames F.sub.1 and F.sub.2 and using the forward and backward optical flow calculations, one may determine F.sub.1est and F.sub.2est.
Using these estimated frames, a comparison may be made with the actual two frames F.sub.1 and F.sub.2 in order to determine frame errors associated with the optical flow calculation”);

selecting, based on a comparison of the error estimate and one or more criteria (see Staranowicz, Fig. 2A and Col 7, Lines 3-8, “At operation 106, the determined frame errors are compared against a threshold value parameter to determine whether or not the number of frame errors exceeds this threshold value. This threshold value may be selected as a static value (i.e., does not change over a given length of time), or may be selected as a dynamic value that varies as a function of, for example, time. Regardless of the type of threshold value chosen (i.e., static or dynamic), the value that may be ultimately entered into the threshold value parameter may take into consideration, for example, the available processing resources for the computing device (such as, e.g., the computing device 300 illustrated in FIG. 3), available bandwidth considerations for data transmission/reception for the interpolated/non-interpolated frames, and/or latency considerations associated with, for example, real-time playback of video content”),

an interpolation module from a plurality of interpolation modules (see Staranowicz, Fig. 3 and Col 8, Lines 23-33, “A variety of combinations of the aforementioned single-pass and/or multi-pass operations may be utilized in order to, for example, reduce artifacts present within an interpolated frame.
For example, a single-pass operation 110 may work adequately for many input-image frame pairs; however, when a determined pixel intensity error exceeds a given threshold, the multi-pass operation may be applied in addition to, or alternatively from, the aforementioned single-pass operation”; the single-pass and multi-pass operations are considered to be a plurality of interpolation modules).

Choi and Staranowicz are analogous art because they are both in the same field of endeavor of video frame interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine frame errors, calculate forward and backward optical flows for two frames of data, compare the frame errors against a threshold parameter, and determine whether single-pass and/or multi-pass operations may be utilized for interpolation, as taught in the method of Staranowicz, in the method of Choi to address frame interpolation algorithm inaccuracies (Staranowicz, Col 6, Lines 18-28).

Choi and Staranowicz do not expressly teach combining the interpolated frame with the two input frames to generate at least a part of an output video.

However, Liang, in a similar invention in the same field of endeavor, teaches combining the interpolated frame with the two input frames to generate at least a part of an output video (see Liang, Paragraph [0018], “The apparatus 300 generates a plurality of interpolated frames according to a plurality of input frames and then combines the interpolated frames with non-repeated frames in the input frames to produce output frames those are displayed on a display device. … When detecting that the input frames are video frames instead of film frames, the storage device 305 stores every frame in the successive frames without discarding any frame since the successive frames are different from each other in the video mode.”).
Choi, Staranowicz, and Liang are analogous art because they are all in the same field of endeavor of video frame interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate a plurality of interpolated frames and combine the interpolated frames with frames in the input frames to produce output frames that are displayed on a display device, as taught in the method of Liang, in the method of Choi in view of Staranowicz, for effective use of the bandwidth and storage capacity of a storage device in generating interpolated frames (Liang, Paragraph [0006]).

Regarding claim 2, Choi in view of Staranowicz in view of Liang teaches the method of claim 1, generating a first warped feature map using the forward optical flow map and the first feature map associated with the first input frame, the first warped feature map being associated with a desired time for which the interpolated frame is to be generated; and generating a second warped feature map using the backward optical flow map and the second feature map associated with the second input frame, the second warped feature map being associated with the desired time (see Choi, Paragraph [0140], “In operation S830, the AI-based frame interpolation apparatus 900 obtains a forward-warped first feature map by forward-warping the first feature map based on the first optical flow and a forward-warped second feature map by forward-warping the second feature map based on the second optical flow”);

wherein the first warped feature map and the second warped feature map are processed by the selected interpolation module to generate the interpolated frame (see Staranowicz, Fig. 1 and Col 9, Lines 25-28, “At operation 118, the interpolated frame is generated”; interpolation modules are taught in Staranowicz and can therefore be used for the warped feature maps).
The rationale applied to the rejection of Claim 1 has been incorporated herein.

As per claim 8, Claim 8 claims one or more computer-readable non-transitory storage media embodying software that is operable when executed to complete the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1. Regarding claim 8, Choi teaches one or more computer-readable non-transitory storage media embodying software that is operable when executed (see Choi, Paragraphs [0178]-[0179], “The above-described embodiments of the disclosure may be written as computer-executable programs or instructions that may be stored in a machine-readable storage medium. The machine-readable storage medium may be provided in the form of a non-transitory storage medium.”).

As per claim 9, Claim 9 claims the same limitations as Claim 2 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 2.

As per claim 15, Claim 15 claims one or more processors; and one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to complete the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.
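The combination the examiner maps to claim 1 reduces to: estimate a flow-based error, compare it to a criterion, then pick an interpolation module. A minimal sketch of that selection logic, using an assumed forward-backward consistency metric and an illustrative threshold (the function names, metric, and threshold are this sketch's assumptions, not the references' actual implementations):

```python
def fb_consistency_error(flow_fwd, flow_bwd):
    """Mean forward-backward flow inconsistency over paired per-pixel 2-D
    flow vectors. Assumes flow_bwd[i] was already sampled at pixel i's
    forward-displaced location, so a consistent pair sums to ~zero."""
    total = 0.0
    for (fx, fy), (bx, by) in zip(flow_fwd, flow_bwd):
        total += ((fx + bx) ** 2 + (fy + by) ** 2) ** 0.5
    return total / len(flow_fwd)

def select_module(error, threshold=1.0):
    # Low error: trust the learned interpolator (cf. claim 5 as mapped to
    # Djelouah). High error: fall back to duplicating an input frame
    # (cf. claim 4 as mapped to Kung's frame repetition).
    return "ml_interpolator" if error < threshold else "frame_duplication"

# Nearly consistent flows -> small error -> learned interpolation.
err = fb_consistency_error([(1.0, 0.0), (0.5, 0.5)], [(-1.0, 0.0), (-0.4, -0.5)])
print(select_module(err))  # ml_interpolator
```

This is exactly the structure the §112(b) rejection probes: the claim recites the comparison but never says what the “one or more criteria” (here, `threshold`) are.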
Regarding claim 15, Choi teaches one or more processors; and one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to (see Choi, Paragraph [0169], “Furthermore, when they are implemented as the dedicated processor, the dedicated processor may include a memory for implementing an embodiment of the disclosure or a memory processing unit for using an external memory.”).

As per claim 16, Claim 16 claims the same limitations as Claim 2 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 2.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 20220375030, in view of Staranowicz et al., US 10489897, in view of Liang et al., US 20100033630, in view of Chi et al., US 10958869.

Regarding claim 3, Choi in view of Staranowicz in view of Liang does not expressly teach the method of claim 2, further comprising: generating a first warped input frame using the forward optical flow map and the first input frame, the first warped input frame being associated with the desired time for which the interpolated frame is to be generated; and generating a second warped input frame using the backward optical flow map and the second input frame, the second warped input frame being associated with the desired time; wherein the first warped input frame and the second warped input frame are additionally processed by the selected interpolation module to generate the interpolated frame.

However, Chi, in a similar invention in the same field of endeavor, teaches generating a first warped input frame using the forward optical flow map and the first input frame, the first warped input frame being associated with the desired time for which the interpolated frame is to be generated (see Chi, Col 15, Lines 18-25, “At step 710, a first warped frame is generated by warping the higher-resolution first frame (606 at stage 2, 608 at stage 3) using the optical flow map generated using the previous flow estimation neural network stage (620, 622 at stage 2, 624, 626 at stage 3)”; 620 is a forward optical flow map);

and generating a second warped input frame using the backward optical flow map and the second input frame, the second warped input frame being associated with the desired time (see Chi, Col 15, Lines 23-29, “At step 712, a second warped frame is generated by warping the higher-resolution second frame (607 at stage 2, 609 at stage 3) using the optical flow map generated using the previous flow estimation neural network stage (620, 622 at stage 2, 624, 626 at stage 3)”; 622 is a backward optical flow map);

wherein the first warped input frame and the second warped input frame are additionally processed by the selected interpolation module to generate the interpolated frame (see Chi, Col 15, Lines 38-45, “At step 718, the candidate interpolated intermediate frame 352 which is to be used by the refinement neural network (e.g. MAR-Net 400) is generated by fusing the final first warped frame and the final second warped frame using the final occlusion map (e.g. 636) as a blending map”).

Choi, Staranowicz, Liang, and Chi are analogous art because they are all in the same field of endeavor of video frame interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate first and second warped frames using forward and backward optical flow maps and generate an interpolated frame, as taught in the method of Chi, in the method of Choi in view of Staranowicz in view of Liang to improve video motion smoothness if frames are missing from a video recording or video stream, improve frame rate, and improve the performance of post-processing slow-motion applications (Chi, Col 5, Lines 8-17).
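Claims 2-3, as mapped to Choi and Chi, warp feature maps and input frames to the target time using the flow maps. A toy gather-style warp on a tiny grayscale frame illustrates the operation; the cited references use forward warping with occlusion handling and bilinear sampling, so this nearest-neighbor version is only an assumed simplification:

```python
def warp_frame(frame, flow):
    """Warp a small grayscale frame (list of rows) by per-pixel integer-ish
    flow vectors, nearest-neighbor style. Out-of-bounds samples are clamped
    to the frame border -- a toy stand-in for real warping layers."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            # Gather the source pixel displaced by the flow vector.
            sx = min(max(x + int(round(dx)), 0), w - 1)
            sy = min(max(y + int(round(dy)), 0), h - 1)
            out[y][x] = frame[sy][sx]
    return out

# Uniform rightward flow samples each pixel's right neighbor (clamped).
print(warp_frame([[1.0, 2.0], [3.0, 4.0]],
                 [[(1, 0), (1, 0)], [(1, 0), (1, 0)]]))  # [[2.0, 2.0], [4.0, 4.0]]
```

In practice this runs as differentiable bilinear sampling on tensors; the clamping here only keeps the toy example in bounds.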
As per claim 10, Claim 10 claims the same limitations as Claim 3 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 3.

As per claim 17, Claim 17 claims the same limitations as Claim 3 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 3.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 20220375030, in view of Staranowicz et al., US 10489897, in view of Liang et al., US 20100033630, in view of Kung et al., US 20140023141.

Regarding claim 4, Choi in view of Staranowicz in view of Liang does not expressly teach the method of claim 1, wherein the comparison indicates that the error estimate is higher than a predetermined criterion, wherein the selected interpolation module generates the interpolated frame by duplicating either the first input frame or the second input frame.

However, Kung, in a similar invention in the same field of endeavor, teaches wherein the comparison indicates that the error estimate is higher than a predetermined criterion, wherein the selected interpolation module generates the interpolated frame by duplicating either the first input frame or the second input frame (see Kung, Paragraph [0029], “[I]n step S28, when the total absolute difference (TAD) is not smaller than the first threshold value (a), i.e., TAD.gtoreq.a, indicating the current image frame being a transition frame, or when the total absolute difference (TAD) is smaller the third threshold value (c), i.e., TAD<c, indicating the current image frame being a stationary frame, the motion compensation processing unit 24 is configured to select frame repetition as the motion compensation mode based on the judging signal from the difference judging unit 23 such that the motion compensation processing unit 24 generates the interpolated image frame based on the current image frame and the reference image frame using frame repetition through duplicating the reference image frame”; the total absolute difference (TAD) is considered to be the error estimate).

Choi, Staranowicz, Liang, and Kung are analogous art because they are all in the same field of endeavor of video frame interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate a total absolute difference, compare it to a threshold, and, if it is higher than the threshold, generate the interpolated image frame by duplicating the reference image frame, as taught in the method of Kung, in the method of Choi in view of Staranowicz in view of Liang to effectively reduce motion blur in an image frame (Kung, Paragraph [0008]).

As per claim 11, Claim 11 claims the same limitations as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 4.

As per claim 18, Claim 18 claims the same limitations as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 4.

Claims 5-7, 12-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 20220375030, in view of Staranowicz et al., US 10489897, in view of Liang et al., US 20100033630, in view of Djelouah et al., US 20240163395.

Regarding claim 5, Choi in view of Staranowicz in view of Liang does not expressly teach the method of claim 1, wherein the comparison indicates that the error estimate is lower than a predetermined criterion, wherein the selected interpolation module is a machine-learning model.
However, Djelouah, in a similar invention in the same field of endeavor, teaches wherein the comparison indicates that the error estimate is lower than a predetermined criterion (see Djelouah, Paragraph [0056], “Where error map 276 satisfies an error criteria, such as by including only error values falling below an error threshold, for example, interpolated frame 236 may deemed suitable for use without modification”), wherein the selected interpolation module is a machine-learning model (see Djelouah, Paragraph [0011], “the present uncertainty-guided video frame interpolation solution includes a machine learning model-based video frame interpolator capable of estimating the expected error together with the interpolated frame”).

Choi, Staranowicz, Liang, and Djelouah are analogous art because they are all in the same field of endeavor of video frame interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether an error map satisfies an error criterion and, when the error falls below an error threshold, deem the interpolated frame suitable without modification, as taught in the method of Djelouah, in the method of Choi in view of Staranowicz in view of Liang to improve interpolation quality (Djelouah, Paragraph [0011]).
Regarding claim 6, Choi in view of Staranowicz in view of Liang in view of Djelouah further teaches the method of claim 5, wherein the machine-learning model is trained to process at least the first warped feature map and the second warped feature map to generate the interpolated frame (see Choi, Paragraph [0062], “perform forward warping on the first feature map 130 at level 1 and thereby obtain a forward-warped first feature map at level 1… perform forward warping on the second feature map 135 at level 1 and thereby obtain a forward-warped second feature map at level 1,” and Paragraph [0059], “An optical flow may be used to interpolate a new frame between consecutive frames”). The rationale of claim 5 has been applied herein.

Regarding claim 7, Choi in view of Staranowicz in view of Liang in view of Djelouah further teaches the method of claim 5, wherein the machine-learning model is jointly trained in an end-to-end manner with one or more additional machine-learning models used to generate (1) the first feature map and the second feature map (see Choi, Paragraph [0055], “Referring to FIG. 1, by inputting a first frame I.sub.0 100 and a second frame I.sub.1 105 among consecutive frames of an image to a first neural network for generating a feature map, a first feature map for the first frame I.sub.0 100 and a second feature map for the second frame I.sub.1 are obtained”) and (2) the forward optical flow map and the backward optical flow map (see Choi, Fig. 8 and Paragraph [0139], “In operation S820, the AI-based frame interpolation apparatus 900 obtains, via a flow estimation neural network, a first optical flow from a first feature map at a certain level to a second feature map at the certain level and a second optical flow from the second feature map at the certain level to the first feature map at the certain level”; a first optical flow from the first feature map to the second feature map is considered to be a forward optical flow map, and a second optical flow from the second feature map to the first feature map is considered to be a backward optical flow map). The rationale of claim 5 has been applied herein.

As per claim 12, Claim 12 claims the same limitations as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 5.

As per claim 13, Claim 13 claims the same limitations as Claim 6 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 6.

As per claim 14, Claim 14 claims the same limitations as Claim 7 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 7.

As per claim 19, Claim 19 claims the same limitations as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 5.

As per claim 20, Claim 20 claims the same limitations as Claim 6 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES, whose telephone number is (703) 756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Jan 29, 2024
Application Filed
Feb 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976: CELL SEGMENTATION IMAGE PROCESSING METHODS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12567138: REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12548159: SCENE PERCEPTION SYSTEMS AND METHODS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12462681: Detection of Malfunctions of the Switching State Detection of Light Signal Systems (granted Nov 04, 2025; 2y 5m to grant)
Patent 12462346: MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT (granted Nov 04, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+38.5%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 21 resolved cases by this examiner. Grant probability is derived from the career allow rate.
