DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This is in response to the application filed on 03/26/2024, in which claims 1-21 are presented for examination.
Status of Claims
2. Claims 1-21 are pending, of which claims 1 and 21 are in independent form.
Allowable Subject Matter
3. Claims 7-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-4 and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Couleaud (US PG Pub 2025/0078346), published on March 06, 2025, in view of Seth et al. (US PG Pub 2022/0374590), published on November 24, 2022.
As per claims 1 and 21, Couleaud teaches: A system for artificial intelligence (AI)-based interactive virtual asset composition, comprising: an input processor configured to receive an input specification for AI-based image generation (Figs. 1 and 6, Para [0064]: shows a user making an input, and machine learning/artificial intelligence is used to generate an image, as taught by Couleaud);
an AI-based image generation system configured to generate a base image based on the input specification (Fig. 6, Para [0008]: e.g., 604 can be a base image, as taught by Couleaud);
an AI-based image layer extraction system configured to automatically identify a plurality of layers of graphical content within the base image (Figs. 1-4 show multiple layers, e.g., 416-420, corresponding to an image, as taught by Couleaud), the AI-based image layer extraction system configured to automatically extract each of the plurality of layers of graphical content into a corresponding pre-layer image so as to generate a plurality of pre-layer images respectively corresponding to the plurality of layers of graphical content within the base image (Para [0040], [0061], Figs. 2-4: the image processing system may generate a depth map 217 of first image 210, extracted from image 210 and defining a depth for each of the pixels of first image 210; for example, images 234, 236, and 238 may correspond to respective layers (base image) of a multi-layer image, as taught by Couleaud);
an AI-based image auto-complete system configured to automatically complete each of the plurality of pre-layer images into a corresponding full-layer image so as to generate a plurality of full-layer images respectively corresponding to the plurality of layers of graphical content within the base image (Abstract, Para [0001], [0008], [0023], [0061], [0077], [0081]: generating, using the first trained machine learning model, a plurality of images respectively corresponding to the plurality of textual descriptions; using one or more machine learning models to convert input text to an output image, and to perform regeneration of (and/or other modification of) one or more portions of such image to generate the multi-layer images; images are generated automatically without any user interaction, as taught by Couleaud);
a layer editing workbench controller configured to provide a user interface for display and editing of a selected one of the plurality of full-layer images (Para [0061]: composite image 240, 242, 244, or 246 may be generated based on receiving user input, e.g., to edit or move around objects and/or layers corresponding to images 234, 236, and 238, as taught by Couleaud); and
an AI-based composite image generator configured to automatically combine the plurality of full-layer images as edited by the layer editing workbench controller into a composite image (Para [0061]: composite image 240, 242, 244, or 246 is generated based on receiving user input, e.g., to edit or move around objects and/or layers corresponding to images 234, 236, and 238, as taught by Couleaud).
Couleaud teaches extracting image layers and automatically generating the multi-layer image based on text input, without requiring editing of image layers by the user, but does not explicitly teach automatically extracting each of the plurality of layers of graphical content.
On the other hand, Seth teaches automatically extracting each of the plurality of layers of graphical content (Para [0021], [0023]: application of trained AI processing to improve creation of presentation content, including automatic adaptation of an exemplary GUI object into a template for presentation content, as well as automatic modification of content layers, where content includes images; it would be obvious to one of ordinary skill in the art that automatic modification of the content layers can include extraction of the image layers, as taught by Seth).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Couleaud invention with the teaching of Seth because doing so would result in increased efficiency by modifying the content faster.
As per claim 2, the combination of Couleaud and Seth teaches further comprising: a project workbench controller configured to provide a user interface for user entry of the input specification, the project workbench controller configured to convey the input specification to the AI-based image generation system and direct generation of the base image by the AI-based image generation system, the project workbench controller configured to direct display of the base image (Figs. 2 and 6-7, Para [0083]: e.g., 608 and 708, image displayed based on input, as taught by Couleaud).
As per claim 3, the combination of Couleaud and Seth teaches further comprising: a project history controller configured to track events performed within the system for AI-based image generation that affect one or more of the base image, the plurality of full-layer images (Fig. 6, Para [0008]: e.g., 604 can be a base image, as taught by Couleaud), and the composite image, the project history controller configured to store a project record of the base image, the plurality of full-layer images, and the composite image after completion of each event (Para [0061], [0081]-[0084], as taught by Couleaud).
As per claim 4, the combination of Couleaud and Seth teaches wherein the project history controller provides a first control that upon activation displays one or more of the base image (Para [0040]: e.g., 234, 236, and 238 show base images, as taught by Couleaud), the plurality of full-layer images, and the composite image of a selected stored project record.
As per claim 17, the combination of Couleaud and Seth teaches further comprising: a layer adding workbench controller configured to provide a user interface for display and adding of a new full-layer image to the plurality of full-layer images (Figs. 2-4 and 6, as taught by Couleaud).
As per claim 18, the combination of Couleaud and Seth teaches wherein the layer adding workbench controller provides a user interface for user entry of a layer input specification (Figs. 2 and 6: a user makes an input, as taught by Couleaud), the layer adding workbench controller configured to convey the layer input specification to the AI-based image generation system and direct generation of the new full-layer image by the AI-based image generation system, the layer adding workbench controller configured to direct display of the new full-layer image (Figs. 2-6: an image is being generated based on the input, as taught by Couleaud).
As per claim 19, the combination of Couleaud and Seth teaches wherein the layer adding workbench controller provides for user entry of a layer mask specification to spatially control implementation of the new full-layer image within the composite image (Figs. 2-3: e.g., 240, 242, 236, and 304-306, as taught by Couleaud).
As per claim 20, the combination of Couleaud and Seth teaches wherein the layer adding workbench controller is configured to provide for adjustment of one or more of a brightness, a contrast, a color, a name, a depth, a lock status, and a visibility status of the new full-layer image within the composite image (Para [0040], [0044], [0089], as taught by Couleaud).
6. Claims 5-6, 13, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Couleaud (US PG Pub 2025/0078346), published on March 06, 2025, in view of Seth et al. (US PG Pub 2022/0374590), published on November 24, 2022, and in further view of Cohen (US PG Pub 2022/0058777), published on February 24, 2022.
As per claim 5, the combination of Couleaud and Seth does not teach wherein the project history controller provides a second control that upon activation respectively reverts a current base image, a current plurality of full-layer images, and a current composite image to the base image, the plurality of full-layer images, and the composite image of a selected stored project record.
On the other hand, Cohen teaches wherein the project history controller provides a second control that upon activation respectively reverts a current base image, a current plurality of full-layer images, and a current composite image to the base image, the plurality of full-layer images, and the composite image of a selected stored project record (Para [0088]-[0093]: shows a layered image, and the user can manipulate and save the image layering; a user optionally uses toolbar 830 to undo any results of edited image 840, as taught by Cohen).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Couleaud and Seth invention with the teaching of Cohen because doing so would result in increased efficiency by enabling a user to edit the segmentation map for each part more easily.
As per claim 6, the combination of Couleaud, Seth, and Cohen teaches wherein the project history controller provides a third control that upon activation launches a new project within the system for AI-based image generation respectively having the base image, the plurality of full-layer images, and the composite image of a selected stored project record as a current base image, a current plurality of full-layer images, and a current composite image (Para [0082]-[0085], [0088]-[0093], as taught by Cohen).
As per claim 13, the combination of Couleaud, Seth, and Cohen teaches wherein the layer editing workbench controller provides a user interface for user entry of a layer input specification, the layer editing workbench controller configured to convey the layer input specification to the AI-based image generation system and direct generation of a new version of the selected one of the plurality of full-layer images by the AI-based image generation system, the layer editing workbench controller configured to direct display of the new version of the selected one of the plurality of full-layer images (Figs. 2 and 6, Para [0064]: shows a user making an input, and machine learning/artificial intelligence is used to generate an image with a background, as taught by Couleaud).
As per claim 15, the combination of Couleaud, Seth, and Cohen teaches wherein the layer editing workbench controller is configured to provide for adjustment of a position of the selected one of the plurality of full-layer images within the composite image (Para [0061]-[0064], Figs. 2-4, as taught by Couleaud).
As per claim 16, the combination of Couleaud, Seth, and Cohen teaches wherein the layer editing workbench controller is configured to provide for adjustment of one or more of a brightness, a contrast, a color, a name, a depth, a lock status, and a visibility status of the selected one of the plurality of full-layer images within the composite image (Para [0078]: adjusting the contrast, as taught by Couleaud).
7. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Couleaud (US PG Pub 2025/0078346), published on March 06, 2025, in view of Seth et al. (US PG Pub 2022/0374590), published on November 24, 2022, and in further view of Bakunov (US PG Pub 2024/0296535), published on September 05, 2024.
As per claim 14, the combination of Couleaud and Seth teaches wherein the layer editing workbench controller includes a refinement control that upon activation directs the AI-based image refinement system to generate a new version of the selected one of the plurality of full-layer images in accordance with a current relative refinement setting for the selected one of the plurality of full-layer images (Para [0061]-[0067], [0087]: a layered image being adjusted based on input, as taught by Couleaud).
The combination of Couleaud and Seth does not teach further comprising: an AI-based image refinement system configured to automatically adjust a realism of a given one of the plurality of full-layer images in accordance with a relative refinement setting.
On the other hand, Bakunov teaches further comprising: an AI-based image refinement system configured to automatically adjust a realism of a given one of the plurality of full-layer images in accordance with a relative refinement setting (Para [0002], [0064], [0108]: a third quality metric assessed by the image quality evaluation system 236 (different from the first and second quality metrics) may be visual realism, and the image quality evaluation system 236 may generate a third quality indicator in the form of a visual realism score for an AI-generated image, adjusting the setting appropriately based on the score, as taught by Bakunov).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Couleaud and Seth invention with the teaching of Bakunov because doing so would result in increased efficiency by allowing a collection manager to manage and curate a particular collection of content.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYEEZ R CHOWDHURY whose telephone number is (571)270-3069. The examiner can normally be reached Monday-Friday 9AM-6:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached at 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAYEEZ R CHOWDHURY/Primary Examiner, Art Unit 2174
Saturday, March 21, 2026