DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because they are directed to products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") claimed as a product without any structural recitations, and because a computer readable storage medium can be comprised of transitory matter (i.e., carrier waves).
Examiner suggests adding "non-transitory" to the claims in order to overcome the rejection, for example:
Claim 11. A computer program product comprising one or more non-transitory computer readable storage media having program instructions collectively stored on the one or more non-transitory computer readable storage media,…
Claim 20. A system comprising: a processor set, one or more non-transitory computer readable storage media, and program instructions collectively stored on the one or more non-transitory computer readable storage media,…
Variations of the term “storage”, for example in the term “computer readable storage medium”, are not considered to limit a media claim to non-transitory embodiments because content may be considered to be stored on a signal during propagation and because many disclosures conflate storage media and signals. For example, US Patent 6,286,104 discloses: “the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium such as a carrier wave”. See the Board decision in Ex parte Mewherter (Appl. No. 10/685,192), where the Board affirmed a § 101 rejection of a “machine readable storage medium”. The decision is precedential, and while even precedential Board decisions are not considered to be examining guidance, the decision can be cited in an examiner’s answer.
Note that the decision also refers to Official guidance in the form of training delivered to the Corps: U.S. Patent & Trademark Office, Evaluating Subject Matter Eligibility Under 35 USC § 101 (Aug. 2012 Update); pp. 11-14, available at http://www.uspto.gov/patents/law/exam/101_training_aug2012.pdf.
Please note that even if the transitory types of machine readable media are removed from the examples of machine readable media cited in the disclosure, the broadest reasonable interpretation of a machine readable medium would still include transitory types unless there is a closed definition in the disclosure excluding them.
A claim drawn to a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation “non-transitory” (or “tangible”) to the recited computer readable storage medium.
Furthermore, according to the "Subject Matter Eligibility of Computer Readable Media" memo dated January 2010: "A claim drawn to a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation 'non-transitory' to the claim." The memo is available at http://www.uspto.gov/patents/law/notices/101_crm_20100127.pdf.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dela Rosa et al (US Pub. No. 2024/0428470 A1).
Regarding Claim 1, Dela Rosa et al teaches a computer-implemented method, comprising:
generating, by a processor set, a first set of keywords based on a user’s stored preferences (see Paragraph [0117] “ … For example, if a post or content augmentations contains specific keywords or phrases, the post is labeled as belonging to a certain category. In some cases, the interaction system 100 uses a predefined list of keywords or phrases, and labels are assigned to the data based on the presence or frequency of these keywords in the content.”);
prioritizing, by the processor set, the first set of keywords into at least one subset of keywords (Paragraph [0095] “ In some cases, the interaction system 100 identifies a prompt of a user to find relevant content augmentations. Identifying the prompt for the first user includes receiving a question or request from the first user via text or speech. The interaction system 100 identifies keywords from the prompt and applies weights to each of the identified keywords. The interaction system 100 applies the identified keywords and corresponding weights to the machine learning model (such as the first machine learning model described further herein).”);
inputting, by the processor set, the at least one subset of keywords into a bi-directional attention-based long short-term memory recurrent neural network (Paragraph [0251] “In some examples, the neural network 1726 may also be one of a number of different types of neural networks or a combination thereof, such as … an Artificial Neural Network (ANN), a Recurrent Neural Network (RNN), a Long Short-Term Memory Network (LSTM), a Bidirectional Neural Network, a Generative Adversarial Network (GAN), …”);
generating, by the processor set using the bi-directional attention-based long short-term memory recurrent neural network, at least one story comprising story text based on the at least one subset of keywords (Paragraph [0256] “In some examples the trained machine-learning program 1702 may be a generative AI model. Generative AI is a term that may refer to any type of artificial intelligence that can create new content from training data 1704. For example, generative AI can produce text, images, video, audio, code or synthetic data that are similar to the original data ….”);
inputting, by the processor set, story text from the at least one story into a video generative model conditioned with images of objects referred to by the at least one story (see [0256] “… For example, generative AI can produce text, images, video, audio, code or synthetic data that are similar to the original data but not identical.” and [0258] “Convolutional Neural Networks (CNNs): CNNs are commonly used for image recognition and computer vision tasks. They are designed to extract features from images by using filters or kernels that scan the input image and highlight important patterns. CNNs may be used in applications such as object detection, ….”); and
generating, by the processor set using the video generative model, a video comprising at least one generated video frame ([0082] “Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked….”).
Regarding Claims 2, 12, Dela Rosa et al teaches the computer-implemented method further comprising verifying compliance of the at least one generated video frame with an embedded smart contract. ([0082] “Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked….”).
Regarding Claims 4, 14, Dela Rosa et al teaches the computer-implemented method further comprising augmenting the first set of keywords with provider-specific keywords, comprising augmenting the first set of keywords with provider-specific keywords selected from a group consisting of current events, featured products, and individuals. ([0082] “Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked ….”).
Regarding Claims 5, 15, Dela Rosa et al teaches the computer-implemented method further comprising augmenting the first set of keywords with provider-specific keywords comprising augmenting the first set of keywords with a fixed set of provider-specific keywords for a fixed duration. ([0082] “Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked….”).
Regarding Claims 6, 16, Dela Rosa et al teaches the computer-implemented method wherein generating at least one story-based text based on the at least one subset of keywords comprises:
inputting the at least one subset of keywords into the bi-directional attention-based long short-term memory recurrent neural network; and inferring relationships between words within the at least one subset of keywords. ([0262] “Transformer models: … These are models that use attention mechanisms to learn the relationships between different parts of input data (such as words or pixels) and generate output data based on these relationships. Transformer models can handle sequential data such as text or speech as well as non-sequential data such as images or code.”).
Regarding Claims 7, 17, Dela Rosa et al teaches the computer-implemented method wherein generating at least one story-based text on the at least one subset of keywords comprises conditioning the bi-directional attention-based long short-term memory recurrent neural network on factors selected from a group consisting of user emotion. ([0087] The system can capture an image or video stream on a client device (e.g., the user system 102) and perform complex image manipulations locally on the user system 102 while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), …).
Regarding Claims 8, 18, Dela Rosa et al teaches the computer-implemented method wherein the video generative model comprises a time and frequency domain-based generative adversarial network.
([0260] Generative adversarial networks (GANs): These are models that consist of two neural networks: a generator and a discriminator. The generator tries to create realistic content that can fool the discriminator, while the discriminator tries to distinguish between real and fake content. …”);
([0094] … In some examples, the interaction system 100 captures usage data related to how and when the devices are used, session duration, frequency of use, and user engagement with specific content or applications.”).
Regarding Claims 9, 19, Dela Rosa et al teaches the computer-implemented method wherein generating video frames via the video generative model comprises generating video frames via a next-frame prediction GAN in operative cooperation with the video generative model. (“[0260] Generative adversarial networks (GANs): These are models that consist of two neural networks: a generator and a discriminator. The generator tries to create realistic content that can fool the discriminator, while the discriminator tries to distinguish between real and fake content. The two networks compete with each other and improve over time. GANs may be used in applications such as image synthesis, video prediction, and style transfer.”).
Regarding Claim 10, Dela Rosa et al teaches the computer-implemented method further comprising: determining a similarity score between the first set of keywords based on a user’s stored preferences and a second set of keywords based on a second user’s stored preferences; determining that the similarity score is above a predefined threshold; modifying the video comprising the at least one generated video frame via an object replacement generative adversarial neural network; and generating a second video. ([0115] “In some cases, the interaction system 100 returns the most similar images or content augmentations to the original image or content augmentations by selecting the top k images or content augmentations with the smallest Euclidean distances. The value of k depends on the desired number of similar images or content augmentations to be retrieved for training the models. By using the nearest neighbor algorithm and Euclidean distance, the interaction system 100 effectively identifies similar images or content augmentations from an original image or content augmentation.” and [0117] “In some cases, the interaction system 100 applies a set of rules or heuristics based on domain knowledge to automatically assign labels to the data. For example, if a post or content augmentations contains specific keywords or phrases, the post is labeled as belonging to a certain category. In some cases, the interaction system 100 uses a predefined list of keywords or phrases, and labels are assigned to the data based on the presence or frequency of these keywords in the content.”).
Regarding Claim 11, CRM Claim 11 is rejected for the same reasons as method Claim 1, since the claim limitations are the same in both claims (the non-transitory computer readable storage medium of the CRM claim is shown in Paragraph [0294]: “‘Non-transitory computer-readable storage medium’ refers, for example, to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine.”).
Regarding Claim 20, apparatus Claim 20 is rejected for the same reasons as method Claim 1, since the claim limitations are the same in both claims.
Allowable Subject Matter
Claims 3 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner’s statement of reasons for allowance:
The prior art fails to teach the computer-implemented method wherein verifying compliance of the at least one generated video frame with an embedded smart contract comprises:
verifying non-compliance of the at least one generated video frame with the embedded smart contract;
identifying at least one first generated video frame comprising a generated object in non-compliance with the embedded smart contract; and
replacing the at least one first generated video frame comprising a generated object in non-compliance with the embedded smart contract with at least one second generated video frame comprising a generated object in compliance with the embedded smart contract as claimed in Claims 3 and 13.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Examiner cites particular columns and line numbers in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Examiner’s Note
Applicant is reminded that the Examiner is entitled to give the language of the claims its broadest reasonable interpretation. Furthermore, the Examiner is not limited by any definition of Applicant’s that is not specifically set forth in the claims.
If amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification that support(s) the structure relied upon for proper claim interpretation, and to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY SHANKAR whose telephone number is (571)272-7682. The examiner can normally be reached M-F 9 am- 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIJAY SHANKAR/ Primary Examiner, Art Unit 2624