DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on 11/06/2024 and 01/15/2025. These drawings are acceptable.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 16 recite the limitation "selected candidate video element". There is insufficient antecedent basis for this limitation in the claim. The past-tense "selected" suggests that a prior selection was made, but no such selection step is positively recited in the claim language. Dependent claims 2-15 and 17-20 are rejected for depending from an indefinite base claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-9, 11-12, 16-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sankaran et al. (US 2020/0134089 A1), hereafter Sankaran, in view of Liu et al. (US 2023/0118966 A1), hereafter Liu.
Regarding claim 1, Sankaran discloses a system (systems) [abstract], comprising:
one or more processors (processor 602) [FIG. 6] configured to:
obtain a first set of candidate elements (final state vector 410; generate images for creative output) [FIG. 4; 0036] from a model (generator network 302) [FIG. 3] by prompting the model using a first prompt (semantic inputs 102; human semantic inputs 402) [FIG. 1; FIG. 4] including at least base data (semantic inputs 102; human semantic inputs 402) [FIG. 1; FIG. 4], a first modifying action (semantic inputs 102; human semantic inputs 402) [FIG. 1; FIG. 4], and creator-specific information (creative recommendations based on users' history 114) [FIG. 1];
cause the base data and the first set of candidate elements to be presented with first branching relationships at a user interface (final state vector 410; generate images for creative output) [FIG. 4; 0036];
receive, via the user interface, a selected candidate element from the first set of candidate elements and a second modifying action (previous state vector 412, context vector 414) [FIG. 4];
obtain a second set of candidate elements from the model by prompting the model using a second prompt including at least the selected candidate element (previous state vector 412, context vector 414) [FIG. 4], the second modifying action, and the creator-specific information (modified story 416, generate picture board) [FIG. 4]; and
cause the selected candidate element and the second set of candidate elements to be presented with second branching relationships at the user interface (modified story 416, generate picture board) [FIG. 4]; and
a database (cloud computing…memory, storage) [0064] configured to store a data structure comprising hierarchical data that describes branching relationships among at least the base data, the first set of candidate elements, and the second set of candidate elements (modified story 416) [FIG. 4].
However, while Sankaran discloses a cognitive assistant for co-generating creative content in the form of text and images, Sankaran fails to explicitly disclose candidate video elements.
Liu, in an analogous environment, discloses candidate video elements (generate a plurality of animated videos, where each animated video corresponds to a story image in the plurality of story images 408) [FIG. 4].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate candidate video elements from a prompt, as disclosed by Liu, with the invention disclosed by Sankaran, the motivation being to generate a story video with content relevant to user input (Liu, [0002]).
Regarding claim 2, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses the video creator-specific information comprises at least one of profile data associated with a specified video creator and data derived from a set of representative videos associated with the specified video creator (creative recommendation based on users’ history 114) [FIG. 1].
Regarding claim 4, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses receive a text-based description or context corresponding to the base data; and include the text-based description or the context into the first prompt (semantic inputs 102) [FIG. 1].
Regarding claim 5, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses the model comprises a large language model (LLM) (bag-of-words (BOW) model…recurrent neural networks (RNNs)…probabilistic language model) [0015; 0028; 0039].
Regarding claim 6, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses the model comprises a large language model (LLM) in series with a text-to-image model (bag-of-words (BOW) model…recurrent neural networks (RNNs); pre-trained conditional generative adversarial network (GAN) model) [0015; 0016].
Regarding claim 7, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses the first modifying action comprises a static modifying action, wherein the static modifying action is predetermined, presented at the user interface, and user-selected at the user interface (semantic inputs 102) [FIG. 1].
Regarding claim 8, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses the first modifying action comprises a user interactive modifying action, wherein the user interactive modifying action is dynamically user input at the user interface (semantic inputs 102) [FIG. 1].
Regarding claim 9, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Sankaran further discloses receive a user edit to the selected candidate video element, wherein the second prompt further includes the user edited selected candidate video element (update setting information 404) [FIG. 4].
Regarding claim 11, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Liu further discloses the first set of candidate video elements comprises a set of text-based candidate video elements (generated story text 210) [FIG. 2].
Regarding claim 12, Sankaran and Liu address all of the features with respect to claim 1 as outlined above.
Liu further discloses the first set of candidate video elements comprises a set of image-based candidate video elements (generated story images 200, 202, 204, 206) [FIG. 2].
Method claims 16, 17, 19, and 20 are drawn to the method performed by system claims 1, 2, 4, and 9, respectively, and are therefore rejected in the same manner as outlined above.
Allowable Subject Matter
Claims 3, 10, 13-15, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Zhang et al. (US 12,494,004 B2) discloses feedback-based instructional visual editing.
Duerr et al. (US 2025/0246206 A1) discloses AI-enhanced video editing.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFAN GADOMSKI whose telephone number is (571) 270-5701. The examiner can normally be reached Monday - Friday, 12-8 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
STEFAN GADOMSKI
Primary Examiner
Art Unit 2485
/STEFAN GADOMSKI/Primary Examiner, Art Unit 2485