Prosecution Insights
Last updated: April 19, 2026
Application No. 18/778,793

GENERATIVE AI-BASED VIDEO OUTPAINTING WITH TEMPORAL AWARENESS

Status: Non-Final OA (§101)
Filed: Jul 19, 2024
Examiner: MCDOWELL, JR., MAURICE L.
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (790 granted / 913 resolved; +24.5% vs TC avg; above average)
Interview Lift: +12.9% (moderate), comparing allow rates with vs. without an interview across resolved cases
Avg Prosecution: 3y 0m typical timeline; 23 applications currently pending
Career History: 936 total applications across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
TC average figures are estimates. Based on career data from 913 resolved cases.

Office Action

Non-Final Rejection under §101, mailed Feb 26, 2026:
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because claim 1 is directed to a method performed by an electronic device, with steps of obtaining, selecting, generating, and generating that are nothing more than software instructions. Software instructions are non-statutory under 35 U.S.C. 101. Claims 2-7 depend from claim 1 and contain further steps (for example, claim 2 contains steps of grouping, selecting, generating, and generating); claims 2-7 therefore have the same problem as claim 1 and are rejected under the same rationale.

Allowable Subject Matter

Claims 8-20 are allowed. The following is the examiner's statement of reasons for allowance. Regarding claim 8 (claim 15 is similar in scope), the prior art does not teach: select at least one of the image frames as a condition frame; generate, from each condition frame based on an image outpainting model, an outpainted condition frame that has a second aspect ratio different from the first aspect ratio; and generate, from at least one of remaining image frames based on a video outpainting model and the outpainted condition frames, an outpainted target frame that has the second aspect ratio, each outpainted target frame having spatial consistency with the image frame from which it was generated and temporal consistency with neighboring outpainted frames in the video.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

AGARWALA (US2013/0128121A1) discloses methods, apparatus, and computer-readable storage media for video completion that may be applied to restore missing content, for example holes or border regions, in video sequences. A video completion technique applies a subspace constraint technique that finds and tracks feature points in the video, which are used to form a model of the camera motion and to predict locations of background scene points in frames where the background is occluded. Another frame where those points were visible is found, and that frame is warped using the predicted points. A content-preserving warp technique may be used. Image consistency constraints may be applied to modify the warp so that it fills the hole seamlessly. A compositing technique is applied to composite the warped image into the hole. This process may be repeated until the missing content is filled on all frames.

DA COSTA DE AZEVEDO (US2025/0111485A1) discloses a technique for generating data. The technique includes determining a plurality of flow vectors between a plurality of regions within a canonical space and a plurality of target spaces, and generating, based on the plurality of flow vectors and a first noise sample associated with the canonical space, a plurality of noise samples associated with the plurality of target spaces. The technique also includes generating, via execution of a diffusion model based on the plurality of noise samples, a plurality of denoised intermediate samples associated with the plurality of target spaces, and blending the plurality of denoised intermediate samples based on the plurality of flow vectors to generate a plurality of blended denoised intermediate samples associated with the plurality of target spaces. The technique further includes generating an output frame based on the plurality of blended denoised intermediate samples.

GEORGIEV (US2015/0054853A1) discloses systems and methods of automatic image sizing. An image is provided in a first frame within a first layout. A request to display the image in a second frame of a second layout is received, where the second frame is different than the first frame. Region data associated with the image is accessed. The region data corresponds to a prior edit to the image and indicates a portion of the image to be displayed in the second frame. The image is provided in the second frame using the region data such that the portion of the image is displayed in the second frame.

JEFFERSON (US2025/0124626A1) discloses a method, apparatus, non-transitory computer-readable medium, and system for image generation that include obtaining, via a user interface, an input image and a user input that indicates a frame for modifying the input image, including a first region inside of the input image and a second region outside of the input image, and excluding a third region inside of the input image. A modified image is generated using an image generation model. The modified image includes original content from the input image in the first region and generated content in the second region, and excludes content from the input image in the third region. The modified image is presented for display in the user interface.

ZHOU (US2024/0331214A1) discloses systems and methods for image processing (e.g., image extension or image uncropping) using neural networks. One or more aspects include obtaining an image (e.g., a source image, a user-provided image, etc.) having an initial aspect ratio, and identifying a target aspect ratio (e.g., via user input) that is different from the initial aspect ratio. The image may be positioned in an image frame having the target aspect ratio, where the image frame includes an image region containing the image and one or more extended regions outside the boundaries of the image. An extended image may be generated (e.g., using a generative neural network), where the extended image includes the image in the image region as well as generated image portions in the extended regions, and the one or more generated image portions comprise an extension of a scene element depicted in the image.

CRAGG (US2024/0273670A1) discloses systems and methods for image processing. Embodiments of the disclosure obtain an image and a target dimension for expanding the image. The system generates a prompt based on the image using a prompt generation network. A diffusion model generates an expanded image based on the image, the target dimension, and the prompt, where the expanded image includes additional content in an outpainted region that is consistent with content of the image and the prompt.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR, whose telephone number is (571) 270-3707. The examiner can normally be reached Mon-Thur & Sat, 2pm-10pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAURICE L. MCDOWELL, JR/
Primary Examiner, Art Unit 2612
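
To make the two-stage structure of the allowed claims concrete, here is a minimal sketch of the pipeline the examiner's reasons for allowance describe. The Frame type, the stub models, and the every-Nth-frame selection policy are hypothetical illustrations, not the implementation disclosed in application 18/778,793:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-stage pipeline recited in allowed claim 8.
# Frame, the stub models, and the frame-selection policy are illustrative
# stand-ins, not the applicant's disclosed implementation.

@dataclass
class Frame:
    pixels: object        # e.g., an HxWx3 array
    aspect_ratio: float   # width / height

def image_outpaint(frame: Frame, target_ratio: float) -> Frame:
    """Stub for the image outpainting model: extends a single frame from
    the first aspect ratio to the second (target) aspect ratio."""
    return Frame(frame.pixels, target_ratio)  # a real model synthesizes borders

def video_outpaint(frame: Frame, conditions: list[Frame],
                   target_ratio: float) -> Frame:
    """Stub for the video outpainting model: extends a target frame while
    conditioning on already-outpainted condition frames, so the result is
    spatially consistent with its source frame and temporally consistent
    with neighboring outpainted frames."""
    return Frame(frame.pixels, target_ratio)

def outpaint_video(frames: list[Frame], target_ratio: float,
                   stride: int = 8) -> list[Frame]:
    # 1. Select condition frames (here, simply every `stride`-th frame)
    #    and outpaint each one with the image model.
    conditions = {i: image_outpaint(frames[i], target_ratio)
                  for i in range(0, len(frames), stride)}
    # 2. Outpaint the remaining frames with the video model, conditioned
    #    on the outpainted condition frames.
    return [conditions[i] if i in conditions
            else video_outpaint(f, list(conditions.values()), target_ratio)
            for i, f in enumerate(frames)]
```

Per the examiner's statement, the allowance hinges on the last step: the video model generates each target frame from both its source frame and the outpainted condition frames, which is what supplies the recited spatial and temporal consistency that a per-frame image model alone cannot guarantee.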
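Among the cited art, DA COSTA DE AZEVEDO is the closest to the claim's temporal-awareness theme: it derives each target frame's diffusion noise from a single canonical noise sample via flow vectors. A rough sketch of that idea, assuming nearest-neighbor warping and toy flow fields (the reference's actual noise generation and blending are more involved):

```python
import numpy as np

# Rough sketch of flow-warped noise for temporally coherent diffusion
# (an illustrative reading of US2025/0111485A1, not its actual algorithm).

def warp_noise(canonical_noise: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Pull per-frame noise from a shared canonical noise field.

    canonical_noise: (H, W) noise sample defined in the canonical space.
    flow: (H, W, 2) flow vectors mapping each target-space pixel back to
          a location in the canonical space.
    Uses nearest-neighbor sampling for simplicity.
    """
    h, w = canonical_noise.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return canonical_noise[src_y, src_x]

# Every frame samples from the same canonical field, so the diffusion
# model sees correlated noise across frames, which encourages temporally
# consistent denoised output.
rng = np.random.default_rng(0)
canonical = rng.standard_normal((64, 64))
flows = [rng.uniform(-2, 2, size=(64, 64, 2)) for _ in range(4)]
per_frame_noise = [warp_noise(canonical, f) for f in flows]
```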

Prosecution Timeline

Jul 19, 2024
Application Filed
Feb 26, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602875
TECHNIQUE FOR THREE DIMENSIONAL (3D) HUMAN MODEL PARSING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602887
AUGMENTED REALITY CONTROL SURFACE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598281
CONTROL APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETERMINING A CAMERA PATH INDICATING A MOVEMENT PATH OF A VIRTUAL VIEWPOINT IN A THREE-DIMENSIONAL SPACE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579741
DETECTING THREE DIMENSIONAL (3D) CHANGES BASED ON MULTI-VIEWPOINT IMAGES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561905
Optimizing Generative Machine-Learned Models for Subject-Driven Text-to-3D Generation
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+12.9%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
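
As a sanity check on the footnote above, a minimal sketch of the arithmetic these figures appear to follow; the add-the-lift formula and the truncation are inferences from the displayed numbers, not a documented methodology:

```python
import math

# Sanity check on the headline projections (assumed arithmetic, not the
# vendor's documented methodology).
granted, resolved = 790, 913
base = granted / resolved                # 0.8653 -> displayed as 86%
lift = 0.129                             # interview lift: +12.9%
with_interview = min(base + lift, 1.0)   # 0.9943 -> displayed as 99%

# The displayed base figure matches truncation (86.5% -> 86%), not rounding.
print(f"grant probability: {math.floor(base * 100)}%")            # 86%
print(f"with interview:    {math.floor(with_interview * 100)}%")  # 99%
```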

Free tier: 3 strategy analyses per month