DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claims do not fall within at least one of the four categories of patent eligible subject matter because claim 1 is directed to a method performed by an electronic device, with the steps of obtaining, selecting, generating, and generating, which are nothing more than software instructions. Software instructions are non-statutory under 35 U.S.C. 101.
Claims 2-7 depend from claim 1 and recite further steps; for example, claim 2 recites the steps of grouping, selecting, generating, and generating. Claims 2-7 therefore suffer from the same deficiency as claim 1 and are rejected under the same rationale.
Allowable Subject Matter
Claims 8-20 are allowed.
The following is an examiner’s statement of reasons for allowance:
Regarding claim 8 (claim 15 is similar in scope), the prior art does not teach:
select at least one of the image frames as a condition frame;
generate, from each condition frame based on an image outpainting model, an outpainted condition frame that has a second aspect ratio different from the first aspect ratio; and
generate, from at least one of the remaining image frames based on a video outpainting model and the outpainted condition frames, an outpainted target frame that has the second aspect ratio, each outpainted target frame having spatial consistency with the image frame from which it was generated and temporal consistency with neighboring outpainted frames in the video.
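For orientation only, the claimed pipeline can be sketched as follows. The helper functions (`outpaint_image`, `outpaint_video`) and the frame representation are hypothetical stand-ins for the claimed image and video outpainting models, not applicant's actual implementation:

```python
# Illustrative sketch of the claimed sequence of steps: select condition
# frames, outpaint them to a new aspect ratio with an image model, then
# outpaint the remaining frames guided by those condition frames.
# All function bodies are hypothetical placeholders.

def outpaint_image(frame, target_ratio):
    # Stand-in for the image outpainting model applied to a condition frame.
    return {"id": frame["id"], "ratio": target_ratio, "role": "condition"}

def outpaint_video(frame, conditions, target_ratio):
    # Stand-in for the video outpainting model: each target frame is
    # generated using the already-outpainted condition frames as guidance.
    return {"id": frame["id"], "ratio": target_ratio, "role": "target",
            "guided_by": [c["id"] for c in conditions]}

def outpaint_sequence(frames, target_ratio, condition_ids):
    # Select at least one image frame as a condition frame and outpaint it.
    conditions = [outpaint_image(f, target_ratio)
                  for f in frames if f["id"] in condition_ids]
    # Generate outpainted target frames from the remaining image frames,
    # conditioned on the outpainted condition frames.
    targets = [outpaint_video(f, conditions, target_ratio)
               for f in frames if f["id"] not in condition_ids]
    return conditions + targets

frames = [{"id": i, "ratio": (16, 9)} for i in range(4)]
out = outpaint_sequence(frames, (9, 16), condition_ids={0})
```

In this sketch every output frame shares the second aspect ratio, and each target frame records which condition frames guided it, mirroring the spatial- and temporal-consistency limitations recited above.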
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
AGARWALA (US2013/0128121A1) discloses methods, apparatus, and computer-readable storage media for video completion that may be applied to restore missing content, for example holes or border regions, in video sequences. A video completion technique applies a subspace constraint technique that finds and tracks feature points in the video, which are used to form a model of the camera motion and to predict locations of background scene points in frames where the background is occluded. Another frame where those points were visible is found, and that frame is warped using the predicted points. A content-preserving warp technique may be used. Image consistency constraints may be applied to modify the warp so that it fills the hole seamlessly. A compositing technique is applied to composite the warped image into the hole. This process may be repeated until the missing content is filled on all frames.
DA COSTA DE AZEVEDO (US2025/0111485A1) discloses a technique for generating data. The technique includes determining a plurality of flow vectors between a plurality of regions within a canonical space and a plurality of target spaces and generating, based on the plurality of flow vectors and a first noise sample associated with the canonical space, a plurality of noise samples associated with the plurality of target spaces. The technique also includes generating, via execution of a diffusion model based on the plurality of noise samples, a plurality of denoised intermediate samples associated with the plurality of target spaces and blending the plurality of denoised intermediate samples based on the plurality of flow vectors to generate a plurality of blended denoised intermediate samples associated with the plurality of target spaces. The technique further includes generating an output frame based on the plurality of blended denoised intermediate samples.
GEORGIEV (US2015/0054853A1) discloses systems and methods of automatic image sizing. An image is provided in a first frame within a first layout. A request to display the image in a second frame of a second layout is received, where the second frame is different from the first frame. Region data associated with the image is accessed. The region data corresponds to a prior edit to the image and indicates a portion of the image to be displayed in the second frame. The image is provided in the second frame using the region data such that the portion of the image is displayed in the second frame.
JEFFERSON (US2025/0124626A1) discloses a method, apparatus, non-transitory computer-readable medium, and system for image generation that include obtaining, via a user interface, an input image and a user input that indicates a frame for modifying the input image, the frame including a first region inside of the input image and a second region outside of the input image, and excluding a third region inside of the input image. A modified image is generated using an image generation model. The modified image includes original content from the input image in the first region and generated content in the second region, and excludes content from the input image in the third region. The modified image is presented for display in the user interface.
ZHOU (US2024/0331214A1) discloses systems and methods for image processing (e.g., image extension or image uncropping) using neural networks. One or more aspects include obtaining an image (e.g., a source image, a user-provided image, etc.) having an initial aspect ratio and identifying a target aspect ratio (e.g., via user input) that is different from the initial aspect ratio. The image may be positioned in an image frame having the target aspect ratio, where the image frame includes an image region containing the image and one or more extended regions outside the boundaries of the image. An extended image may be generated (e.g., using a generative neural network), where the extended image includes the image in the image region as well as generated image portions in the extended regions, and the one or more generated image portions comprise an extension of a scene element depicted in the image.
CRAGG (US2024/0273670A1) discloses systems and methods for image processing. Embodiments of the disclosure obtain an image and a target dimension for expanding the image. The system generates a prompt based on the image using a prompt generation network. A diffusion model generates an expanded image based on the image, the target dimension, and the prompt, where the expanded image includes additional content in an outpainted region that is consistent with content of the image and the prompt.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR whose telephone number is (571)270-3707. The examiner can normally be reached Mon-Thur & Sat: 2pm-10pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAURICE L. MCDOWELL, JR/Primary Examiner, Art Unit 2612