DETAILED ACTION
Notice of Pre-AIA or AIA Status
● The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
● This action is responsive to the following communication: US Patent Application filed on 8/15/2025.
● Claims 21-40 are currently pending; claims 1-20 have been canceled.
Information Disclosure Statement
● The information disclosure statements (IDSs) submitted on 10/1/2025 and 10/6/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 21-40 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sadr et al. (US 20240202796).
Regarding claim 21, Sadr discloses a system for a generative content assistant (machine learning model for generating imagined contents, fig. 1), comprising:
a processor programmed to:
access a request to generate an image (generate output image with prompts associated with contents, figs. 3A-3M) of a content element associated with source content (source contents as shown in figs. 3A-3M, also fig. 5);
construct a prompt (input prompts as shown in fig. 5) for a generative language model, the prompt instructing the generative language model to identify one or more semantic attributes (semantic attributes, figs. 3B-3M) of the content element from the source content;
execute a generative language model (generative language model, figs. 5, 18-21) based on the prompt, and determine the one or more semantic attributes (semantic attributes for generation of images/videos, figs. 3B-3M) of the content element based on the executed generative language model;
construct a second prompt (second prompt/input and subsequent prompts/inputs, fig. 19) for a generative AI image model based on the one or more semantic attributes, the second prompt instructing the generative image model to generate an image of the content element based on the one or more semantic attributes (subsequent prompts for generation of imagined images/videos, figs. 3B-3M, 8C); and execute the generative AI image model (figs. 5, 18, 19, 20) based on the second prompt, and generate a visual output (visual output of imagined images/videos, figs. 3B-3M) based on execution of the generative AI image model, the visual output (visual output displayed on the user interface depicting the imagined contents with semantic attributes as shown in figs. 3B-3M, figs. 6-7) depicting the content element based on the semantic attributes identified in the source content (source contents as shown in figs. 3B-3M).
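For illustration of the two-stage pipeline mapped above, a minimal Python sketch follows. Neither claim 21 nor Sadr is tied to a particular library or API, so the function names, stand-in model calls, and prompt wording below are all assumptions:

```python
# Illustrative sketch only: hypothetical stand-ins for the generative language
# model and generative AI image model recited in claim 21.

def run_language_model(prompt: str) -> list[str]:
    """Stand-in for the generative language model (first execution)."""
    # A real system would invoke a text-generation model here and parse
    # the semantic attributes out of its response.
    return ["weathered leather jacket", "dim warehouse lighting", "guarded posture"]

def run_image_model(prompt: str) -> bytes:
    """Stand-in for the generative AI image model (second execution)."""
    # A real system would invoke a text-to-image model here.
    return b"<image-bytes>"

def generate_visual(content_element: str, source_content: str) -> bytes:
    # First prompt: instruct the language model to identify semantic
    # attributes of the content element from the source content.
    first_prompt = (
        f"Identify the semantic attributes of '{content_element}' as described "
        f"in the following source content:\n{source_content}"
    )
    attributes = run_language_model(first_prompt)
    # Second prompt: instruct the image model to depict the content element
    # according to the identified attributes.
    second_prompt = f"An image of {content_element}, " + ", ".join(attributes)
    return run_image_model(second_prompt)
```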
Regarding claim 22, Sadr further discloses the system of claim 21, wherein to construct the prompt for the generative language model the processor is further programmed to: identify portions (the cropping input can be descriptive of a portion of the particular model-generated dataset. The portion of the particular model-generated dataset can be segmented to generate a cropped model-generated dataset. In some implementations, the one or more search results can be determined based on the cropped model-generated dataset. pars. 50, 60) of the source content that are semantically related to the content element.
Regarding claim 23, Sadr further discloses the system of claim 21, wherein to determine the one or more semantic attributes, the processor is further programmed to: identify descriptive statements (descriptive statements, par. 112), behaviors, properties, appearances, or interactions associated with the content element in the source content.
Regarding claim 24, Sadr further discloses the system of claim 21, wherein to determine the one or more semantic attributes, the processor is further programmed to: extract (extracting attributes from contextual information, par. 108) implicit or inferred attributes using contextual reasoning (par. 108) performed by the generative language model.
Regarding claim 25, Sadr further discloses the system of claim 21, wherein to construct the second prompt, the processor is further programmed to: format the one or more semantic attributes into structured image-generation parameters specifying at least one of lighting, pose, environment, attire, or stylistic constraints (user interface with multiple prompts as shown in figs. 3A-3M).
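A minimal sketch of one way such attributes could be folded into structured image-generation parameters; the field names and dataclass layout are assumptions, as the claim requires only that at least one such constraint be specified:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Illustrative sketch only: one possible structure for the image-generation
# parameters recited in claim 25.

@dataclass
class ImageGenParams:
    lighting: Optional[str] = None
    pose: Optional[str] = None
    environment: Optional[str] = None
    attire: Optional[str] = None
    style: Optional[str] = None

def format_second_prompt(element: str, params: ImageGenParams) -> str:
    # Fold only the populated constraints into the structured prompt.
    parts = [f"{k}: {v}" for k, v in asdict(params).items() if v is not None]
    return f"Generate an image of {element}. " + "; ".join(parts)

# Example:
# format_second_prompt("the detective",
#                      ImageGenParams(lighting="low-key", attire="trench coat"))
```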
Regarding claim 26, Sadr further discloses the system of claim 21, wherein to execute the generative AI image model, the processor is further programmed to: select the generative AI image model (image generation model, par. 94) from a plurality of available image models (par. 44) based on at least one of cost, performance, availability (par. 91), or model capabilities.
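A minimal sketch of one possible selection policy over a plurality of image models; the fields and the ranking heuristic are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a simple policy for selecting one generative AI
# image model from a plurality, scored on the factors recited in claim 26.

@dataclass
class ImageModel:
    name: str
    cost_per_image: float          # lower is better
    quality_score: float           # higher is better, in [0, 1]
    available: bool
    capabilities: set[str] = field(default_factory=set)

def select_model(models: list[ImageModel], required: set[str]) -> ImageModel:
    # Keep only models that are available and support every required capability.
    candidates = [m for m in models if m.available and required <= m.capabilities]
    if not candidates:
        raise RuntimeError("no available model supports the required capabilities")
    # Rank by quality per unit cost; any scoring over these factors would
    # serve the same purpose.
    return max(candidates, key=lambda m: m.quality_score / m.cost_per_image)
```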
Regarding claim 27, Sadr further discloses the system of claim 21, wherein to generate the visual output, the processor is further programmed to: store the visual output together with metadata identifying the semantic attributes used to generate the second prompt (par. 47).
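A minimal sketch of storing the visual output together with metadata identifying the semantic attributes; the file layout and field names are hypothetical:

```python
import json
import pathlib

# Illustrative sketch only: persisting the visual output alongside metadata
# identifying the semantic attributes behind the second prompt (claim 27).

def store_output(image_bytes: bytes, attributes: list[str],
                 second_prompt: str, out_dir: str = "outputs") -> None:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "visual.png").write_bytes(image_bytes)
    (out / "visual.json").write_text(json.dumps(
        {"semantic_attributes": attributes, "second_prompt": second_prompt},
        indent=2))
```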
Regarding claim 28, Sadr further discloses the system of claim 21, wherein to generate the visual output, the processor is further programmed to: display the visual output through the graphical user interface together with textual explanations (textual explanations, figs. 3H-3M) derived from the one or more semantic attributes.
Regarding claim 29, Sadr further discloses the system of claim 21, wherein to construct the second prompt, the processor is further programmed to: generate alternative structured prompts specifying stylistic variations (stylistic variations as shown in fig. 3K-3M) for the visual depiction of the content element.
Regarding claim 30, Sadr further discloses the system of claim 21, wherein to construct the second prompt (second input data, par. 8), the processor is further programmed to: compute a semantic representation of the content element based on the one or more semantic attributes, the semantic representation comprising a machine-interpretable embedding used to refine (second input/prompt refined from the first input/prompt, par. 47) the second prompt.
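A minimal sketch using a toy hashed bag-of-words vector as the machine-interpretable embedding; a deployed system would use a learned text-embedding model, and the preset-matching refinement step is an assumption:

```python
import math
import zlib
from collections import Counter

# Illustrative sketch only: a toy embedding used to refine the second
# prompt, per claim 30.

DIM = 64

def embed(text: str) -> list[float]:
    # Hash each token into a fixed-size count vector, then L2-normalize.
    counts = Counter(zlib.crc32(tok.encode()) % DIM for tok in text.lower().split())
    vec = [float(counts.get(i, 0)) for i in range(DIM)]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product of unit vectors equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def refine_prompt(base_prompt: str, attributes: list[str],
                  style_presets: dict[str, str]) -> str:
    # Embed the semantic attributes and append the nearest stylistic preset.
    rep = embed(" ".join(attributes))
    best = max(style_presets, key=lambda k: cosine(rep, embed(style_presets[k])))
    return f"{base_prompt}, {style_presets[best]}"
```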
Regarding claim 31, Sadr further discloses the system of claim 21, wherein to execute the generative AI image model, the processor is further programmed to: select a diffusion model (diffusion model, par. 43) configured to generate visual outputs (figs. 3A-3M) responsive to textual prompts.
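As one concrete (but not required) example of such a diffusion model, the open-source Hugging Face diffusers library exposes text-to-image pipelines responsive to textual prompts; the checkpoint named below is illustrative only:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint; neither the claim
# nor the reference requires this particular model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# Generate a visual output responsive to a textual prompt.
image = pipe("an image of the detective, low-key lighting, trench coat").images[0]
image.save("visual.png")
```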
Regarding claim 32, Sadr further discloses the system of claim 21, wherein the source content comprises a screenplay and the content element comprises a character (figs. 3K-3M) or an object referenced in the screenplay.
Regarding claim 33, Sadr further discloses the system of claim 32, wherein to generate the visual output (visual output as shown in figs. 3A-3M), the processor is further programmed to: instantiate an interactive persona (interactive display persona, figs. 3A-3M) associated with the character or object, the interactive persona configured to respond via text, voice, or animation based on semantic attributes (selectable attributes as shown in figs. 3A-3M) identified from the screenplay.
Claims 34-40 recite limitations that are similar to, and within the same scope of invention as, those in claims 21-33 above, or combinations thereof; therefore, claims 34-40 are rejected under the same rationale and basis as described for claims 21-33 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THIERRY L PHAM whose telephone number is (571)272-7439. The examiner can normally be reached M-F, 11-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THIERRY L PHAM/ Primary Examiner, Art Unit 2654