Prosecution Insights
Last updated: April 19, 2026
Application No. 18/164,373

GENERATING USER-SPECIFIC SYNTHETIC CONTENT UTILIZING MACHINE LEARNING WITH USER CREATED CONTENT ITEMS

Final Rejection (§101, §103)
Filed: Feb 03, 2023
Examiner: SHAIKH, ZEESHAN MAHMOOD
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: Dropbox Inc.
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (16 granted / 31 resolved; -10.4% vs TC avg)
Interview Lift: +55.0% (resolved cases with interview vs. without)
Avg Prosecution: 3y 2m (32 applications currently pending)
Total Applications: 63 (across all art units)

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 31 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is responsive to the applicant’s response dated 11/14/2025. The applicant amended claims 1, 10-11, and 16.

Response to Arguments

Applicant’s arguments with respect to the correction of the specification/abstract, see Remarks (pg. 12, line 13 – pg. 13, line 2), filed 11/14/2025, have been fully considered and are persuasive. The objection to the specification/abstract has been withdrawn.

Applicant's arguments with respect to 35 U.S.C. 101 filed 11/14/2025 have been fully considered but are not persuasive. The content generation model is passive, and information is being identified for it. The model is being utilized in some manner that a human can practically perform without actual active processing by that model. The claim lacks detail of what the model is doing or details of the model itself. The applicant is encouraged to explain in detail how the parameters are modified or how the model is specifically trained to generate the content item based upon the user attribute. Alternatively, the applicant is encouraged to discuss what the model is and what it is doing to generate the item. The applicant should be more specific about how the model is trained or how the attribute is used by the trained model to generate the new content item.

Applicant’s arguments with respect to 35 U.S.C. 102 for claims 1-3, 5, 9-13, 16 and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Given the amendments, a new ground of rejection is provided below. Applicant’s arguments with respect to 35 U.S.C.
103 for claims 4, 6-8, 14, 15, and 18-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Given the amendments, a new ground of rejection is provided below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claim 1 recites, “identifying, for a user account of a content management system, one or more content items for fine-tuning parameters of a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items”, “modifying the parameters of the content generation model based on at least one attribute associated with the one or more content items, wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account”, “receiving, from a client device associated with the user account, a request to generate a new content item”, and “in response to the request to generate the new content item, generating a custom content item utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items associated with the user account within the custom content item.” The limitation of identifying content for adjustment, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
Nothing in the claim precludes the step from practically being performed in the mind. For example, “identifying” in the context of this claim encompasses selecting text, which a human can do either in the mind or with pen and paper. Next, the limitation of modifying parameters of a model, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Nothing in the claim precludes the step from practically being performed in the mind. For example, “modifying” in the context of this claim encompasses adjusting rules, which a human can do either in the mind or with pen and paper. Next, the limitation of receiving a request, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a client device”, nothing in the claim precludes the step from practically being performed in the mind. For example, “receiving” in the context of this claim encompasses receiving a command, which a human can do either in the mind or with pen and paper. Lastly, the limitation of generating a custom content item, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “model”, nothing in the claim precludes the step from practically being performed in the mind. For example, “generating” in the context of this claim could encompass editing text, which a human can do either in the mind or with pen and paper. The judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements, using a “client device” and a “model”, to modify content. These elements in these steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using a client device and a model to perform content modification amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 2-10 are also rejected for the same reasons provided in independent claim 1 above. The dependent claims, including the further recited limitations, do not integrate the abstract idea into a practical application, and the additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.
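The examiner's invitation above (to detail how the parameters are modified and how the attribute is used by the trained model) points at the kind of concrete mechanics that could help under §101. Purely as an illustrative sketch, and not drawn from the application or the cited references: the attribute, the toy generator, and all names below are hypothetical, standing in for a gradient-based fine-tuning step that adjusts a pre-trained generator parameter toward a visual characteristic of the user's own items.

```python
def extract_attribute(items):
    """Toy visual characteristic of a set of content items: their mean brightness."""
    return sum(items) / len(items)

def fine_tune(base_param, user_items, lr=0.1, steps=50):
    """Gradient descent on the squared error between the generator's output
    attribute and the attribute extracted from the user's own content items."""
    target = extract_attribute(user_items)
    param = base_param
    for _ in range(steps):
        generated = param                  # toy generator: output attribute equals the parameter
        grad = 2.0 * (generated - target)  # derivative of (generated - target)**2
        param -= lr * grad                 # the "modifying the parameters" step
    return param

# Parameter "pre-trained" on training items that differ from the user's items:
base = extract_attribute([0.2, 0.3, 0.25])
# Fine-tuned on the user-account items so generated content reflects their attribute:
tuned = fine_tune(base, [0.8, 0.9, 0.85])
print(round(tuned, 2))  # converges toward the user attribute, 0.85
```

Claim or specification language that recites this level of detail (the loss, the update rule, how the extracted attribute enters training) is harder to characterize as a step performable in the mind than "modifying the parameters" alone.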
Independent claim 11 recites, “identify, for a user account of a content management system, one or more content items associated with the user account”, “receive, from a client device associated with the user account, a request to generate a new content item based on a content description corresponding to the request”, and “in response to the request to generate the new content item, generate a custom content item by utilizing a content generation model trained utilizing training content items different from the one or more content items to synthesize content that depicts the content description from the request with at least one attribute from the one or more content items within the custom content item, wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account”. The limitation of identifying content for adjustment, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “non-transitory computer-readable medium” and “a processor”, nothing in the claim precludes the step from practically being performed in the mind. For example, “identify” in the context of this claim encompasses selecting text, which a human can do either in the mind or with pen and paper. Next, the limitation of receiving a request, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a client device”, nothing in the claim precludes the step from practically being performed in the mind. For example, “receive” in the context of this claim encompasses receiving a command, which a human can do either in the mind or with pen and paper.
Lastly, the limitation of generating a custom content item, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “model”, nothing in the claim precludes the step from practically being performed in the mind. For example, “generate” in the context of this claim could encompass editing text, which a human can do either in the mind or with pen and paper. The judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements, using a “client device”, a “model”, a “non-transitory computer-readable medium”, and a “processor”, to modify content. These elements in these steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using a client device, a model, a non-transitory computer-readable medium, and a processor to perform content modification amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 12-15 are also rejected for the same reasons provided in independent claim 11 above.
The dependent claims, including the further recited limitations, do not integrate the abstract idea into a practical application, and the additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.

Independent claim 16 recites, “identify, for a user account of a content management system, a set of content items for fine-tuning parameters of a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the set of content items”, “modify the parameters of the content generation model based on at least one attribute associated with the set of content items, wherein the at least one attribute comprises a visual characteristic of the set of content items associated with the user account”, “receive, from a client device associated with the user account, a request to generate a new content item based on a user selection of a subset of content items from the set of content items”, and “in response to the request to generate the new content item and the user selection of the subset of content items, generate a custom content item utilizing the content generation model to synthesize at least one attribute associated with the subset of content items associated with the user account within the custom content item”. The limitation of identifying a set of content for adjustment, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “non-transitory computer-readable medium” and “processor”, nothing in the claim precludes the step from practically being performed in the mind.
For example, “identify” in the context of this claim encompasses selecting a set of text, which a human can do either in the mind or with pen and paper. Next, the limitation of modifying parameters of a model, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “non-transitory computer-readable medium” and “processor”, nothing in the claim precludes the step from practically being performed in the mind. For example, “modify” in the context of this claim encompasses adjusting rules, which a human can do either in the mind or with pen and paper. Next, the limitation of receiving a request, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a client device”, nothing in the claim precludes the step from practically being performed in the mind. For example, “receive” in the context of this claim encompasses receiving a command, which a human can do either in the mind or with pen and paper. Lastly, the limitation of generating a custom content item, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “model”, “non-transitory computer-readable medium” and “processor”, nothing in the claim precludes the step from practically being performed in the mind. For example, “generate” in the context of this claim could encompass editing text, which a human can do either in the mind or with pen and paper. The judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements, using a “client device”, a “model”, a “non-transitory computer-readable medium”, and a “processor”, to modify content.
These elements in these steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using a client device, a model, a non-transitory computer-readable medium, and a processor to perform content modification amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 17-20 are also rejected for the same reasons provided in independent claim 16 above. The dependent claims, including the further recited limitations, do not integrate the abstract idea into a practical application, and the additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 9-13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. US 20230042221 A1 (hereinafter Xu) in view of Wilson et al. US 20220377257 A1 (hereinafter Wilson).

Regarding claim 1, Xu teaches a computer-implemented method comprising: identifying, for a user account of a content management system, one or more content items for fine-tuning parameters of a content generation model trained to generate new content items; (FIG. 2A, 202, 208, [0042] “the language-guided image-editing system 106 receives an image 202 displayed within a client device 204”, examiner interprets 202 as an example of a content item; [0046] “the language-guided image-editing system 106 utilizes the cycle augmented generative adversarial neural network 208 to generate a modified image that reflects the modification request from the natural language text 206 within the image 202”, examiner interprets 208 to be the content generation model; FIG. 1, 112, 106, [0035] “the client device 110 is operated by a user to perform a variety of functions (e.g., via a digital graphics application 112)”, examiner interprets the user account to be linked to the graphics application and 106 to be the content management system.); modifying the parameters of the content generation model based on at least one attribute associated with the one or more content items, (FIG.
3, [0038] “the language-guided image-editing system 106 on the server device(s) 102 learns parameters for one or more neural networks…the client device 110 obtains (e.g., downloads) the language-guided image-editing system 106 with one or more neural network with the learned parameters from the server device. modified digital images in accordance with natural language requests independent from the server device(s) 102”; [0054] “For example, FIG. 3 illustrates the language-guided image-editing system 106 learning parameters of an editing description neural network”); receiving, from a client device associated with the user account, a request to generate a new content item (FIG. 2A, 206 (request), 210 (new content), 204 (client device)); and in response to the request to generate the new content item, generating a custom content item utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items within the custom content item (FIG. 2A, 210, [0046] “the language-guided image-editing system 106 utilizes a cycle augmented generative adversarial neural network 208 with the natural language text 206 (e.g., a visual modification request to “increase brightness and contrast for the image”) and the image 202 to generate the modified image 210”, in this example, examiner interprets the attribute to be the brightness and contrast).
Xu fails to teach a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items; wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account; and one or more content items associated with the user account. However, Wilson teaches a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items (FIG. 6, [0023] “embodiments can receive a first image (e.g., a screenshot) or set of images (e.g., a video feed) of a first user that indicates the personal style of the first user, such as a hair style or makeup style of the first user. The first image can then be fed to one or more machine learning models in order to learn and capture the personal style of the first user. For example, particular models (e.g., a modified Generative Adversarial Network (GAN)) can perform several training epochs to learn that the first user always wears blue eyeshadow with blue lipstick at a particular pattern”, FIG. 3, 302, [0065]); wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account ([0028] “these modified models can use multiple discriminators that distinguish whether generated images include the personal image style and are real or fake so as to make the output image as realistic as possible”; FIG. 5D, 551, examiner interprets the elements of 551 as visual characteristics associated with the user’s account); and one or more content items associated with the user account (FIG. 5D, 540, 541). Xu in view of Wilson is considered to be analogous to the claimed invention because both are in the same field of digital content transformation.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques performing language guided digital image editing utilizing a cycle-augmentation generative-adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism of Xu with the technique of training a generation model to produce attribute-specific content taught by Wilson in order to improve techniques of applying data indicative of a personal style to a feature of a user represented in one or more images based on determining or estimating the personal style (see Wilson [Abstract]).

Regarding claim 2, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 2 depends. Additionally, Xu teaches wherein receiving the request to generate the new content item comprises receiving a text prompt describing content to depict within the new content item (FIG. 3, [0018] “the language-guided image-editing system utilizes the cyclically trained GAN to modify an image based on a natural language modification request for the image (e.g., text-based request)”).

Regarding claim 3, Xu in view of Wilson teaches all of the limitations of claim 2, upon which claim 3 depends. Additionally, Xu teaches wherein generating the custom content item comprises utilizing the content generation model to synthesize one or more features represented in the content of the text prompt (FIG. 2A, examiner interprets 206 to potentially be a text prompt as shown in [0018]).

Regarding claim 5, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 5 depends.
Additionally, Xu teaches receiving, from the client device associated with the user account, a selection of user-selected content items from the one or more content items; and in response to the request to generate the new content item, generating the custom content item utilizing the content generation model to synthesize the at least one attribute associated with the user-selected content items within the custom content item (FIG. 2A, image 202 is being requested for modification; therefore, it is reasonable to assume it was selected by the user from one or more content items).

Regarding claim 9, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 9 depends. Additionally, Xu teaches wherein the custom content item comprises an image, a video, or a text document (FIG. 2A, 202).

Regarding claim 10, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 10 depends. Additionally, Wilson teaches modifying the parameters of the content generation model to generate a custom content generation model that synthesizes the visual characteristic of the one or more content items associated with the user account ([0023] “embodiments can receive a first image (e.g., a screenshot) or set of images (e.g., a video feed) of a first user that indicates the personal style of the first user, such as a hair style or makeup style of the first user. The first image can then be fed to one or more machine learning models in order to learn and capture the personal style of the first user.
For example, particular models (e.g., a modified Generative Adversarial Network (GAN)) can perform several training epochs to learn that the first user always wears blue eyeshadow with blue lipstick at a particular pattern”; [0028]).

Regarding claim 11, Xu teaches a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computing device to: identify, for a user account of a content management system, one or more content items associated with the user account (FIG. 2A, 202, 208, [0042] “the language-guided image-editing system 106 receives an image 202 displayed within a client device 204”, examiner interprets 202 as an example of a content item; [0046] “the language-guided image-editing system 106 utilizes the cycle augmented generative adversarial neural network 208 to generate a modified image that reflects the modification request from the natural language text 206 within the image 202”, examiner interprets 208 to be the content generation model; FIG. 1, 112, 106, [0035] “the client device 110 is operated by a user to perform a variety of functions (e.g., via a digital graphics application 112)”, examiner interprets the user account to be linked to the graphics application and 106 to be the content management system.); receive, from a client device associated with the user account, a request to generate a new content item based on a content description corresponding to the request (FIG. 2A, 206 (request), 210 (new content), 204 (client device); FIG.
2A, 206, [0045] “a visual modification request includes a natural language text instruction or command that specifies one or more editing operations (e.g., brightness, hue, tone, saturation, contrast, exposure, removal), one or more adjustment types (e.g., increase, decrease, change, set), and/or one or more degrees of adjustments (e.g., a lot, a little, a numerical value)”, examiner interprets text instruction/command as content description); and in response to the request to generate the new content item, generate a custom content item by utilizing a content generation model to synthesize content that depicts the content description from the request with at least one attribute from the one or more content items within the custom content item, (FIG. 2A, 210, [0046] “the language-guided image-editing system 106 utilizes a cycle augmented generative adversarial neural network 208 with the natural language text 206 (e.g., a visual modification request to “increase brightness and contrast for the image”) and the image 202 to generate the modified image 210”, in this example, examiner interprets the attribute to be the brightness and contrast). Xu fails to teach a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items; one or more content items associated with the user account; and wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account. However, Wilson teaches a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items (FIG.
6, [0023] “embodiments can receive a first image (e.g., a screenshot) or set of images (e.g., a video feed) of a first user that indicates the personal style of the first user, such as a hair style or makeup style of the first user. The first image can then be fed to one or more machine learning models in order to learn and capture the personal style of the first user. For example, particular models (e.g., a modified Generative Adversarial Network (GAN)) can perform several training epochs to learn that the first user always wears blue eyeshadow with blue lipstick at a particular pattern”, FIG. 3, 302, [0065]); one or more content items associated with the user account (FIG. 5D, 540, 541); and wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account ([0028] “these modified models can use multiple discriminators that distinguish whether generated images include the personal image style and are real or fake so as to make the output image as realistic as possible”; FIG. 5D, 551, examiner interprets the elements of 551 as visual characteristics associated with the user’s account). Xu in view of Wilson is considered to be analogous to the claimed invention because both are in the same field of digital content transformation.

Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques performing language guided digital image editing utilizing a cycle-augmentation generative-adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism of Xu with the technique of training a generation model to produce attribute-specific content taught by Wilson in order to improve techniques of applying data indicative of a personal style to a feature of a user represented in one or more images based on determining or estimating the personal style (see Wilson [Abstract]).
Regarding claim 12, Xu in view of Wilson teaches all of the limitations of claim 11, upon which claim 12 depends. Additionally, Xu teaches a text prompt comprising one or more features to depict within the new content item as the content description; or one or more user-selected menu options comprising the one or more features to depict within the new content item as the content description (FIG. 2A, 206, [0045] “a visual modification request includes a natural language text instruction or command that specifies one or more editing operations (e.g., brightness, hue, tone, saturation, contrast, exposure, removal), one or more adjustment types (e.g., increase, decrease, change, set), and/or one or more degrees of adjustments (e.g., a lot, a little, a numerical value)”, examiner interprets text instruction/command as content description).

Regarding claim 13, Xu in view of Wilson teaches all of the limitations of claim 12, upon which claim 13 depends. Additionally, Xu teaches further comprising instructions that, when executed by the at least one processor, cause the computing device to: identify one or more feature weights associated with the one or more features of the content description (FIG.
6, 602, [0087] “the language-guided image-editing system 106 utilizes a natural language text t (e.g., “brighten the image and remove the woman on the left”) with a language encoder 602 to generate a natural language embedding h…106 generates a visual feature map V from an image x utilizing an image encoder 606”; [0088] “the modified visual feature map V′ is generated through the scaling and shifting of the visual feature map V using a reweighted natural language embedding that indicates degrees of editing within different locations of an image”); and generate the custom content item by utilizing the content generation model to synthesize the content that depicts the content description from the request based on the one or more feature weights ([0089] “the language-guided image-editing system 106 utilizes a cycle augmented generative adversarial neural network 610 (as described above) to generate a modified image x̃ based on the modified visual feature map V”).

Regarding claim 16, Xu teaches a system comprising: at least one processor (FIG. 13, 1302); and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor (FIG. 13, 1306), cause the system to: identify, for a user account of a content management system, a set of content items for fine-tuning parameters of a content generation model trained to generate new content items (FIG. 2A, 202, 208, [0042] “the language-guided image-editing system 106 receives an image 202 displayed within a client device 204”, examiner interprets 202 as an example of a content item; [0046] “the language-guided image-editing system 106 utilizes the cycle augmented generative adversarial neural network 208 to generate a modified image that reflects the modification request from the natural language text 206 within the image 202”, examiner interprets 208 to be the content generation model; FIG.
1, 112, 106, [0035] “the client device 110 is operated by a user to perform a variety of functions (e.g., via a digital graphics application 112)”, examiner interprets the user account to be linked to the graphics application and 106 to be the content management system; [0035] “the client device 110 performs functions such as, but not limited to, capturing and storing images (or videos), displaying images (or other content), and modifying (or editing) the images (or videos).”, examiner interprets more than one media content as a set); modify the parameters of the content generation model based on the set of content items (FIG. 3, [0038] “the language-guided image-editing system 106 on the server device(s) 102 learns parameters for one or more neural networks…the client device 110 obtains (e.g., downloads) the language-guided image-editing system 106 with one or more neural network with the learned parameters from the server device…modified digital images in accordance with natural language requests independent from the server device(s) 102”; [0054] “For example, FIG. 3 illustrates the language-guided image-editing system 106 learning parameters of an editing description neural network”); receive, from a client device associated with the user account, a request to generate a new content item based on a user selection of a subset of content items from the set of content items (FIG. 2A, 206 (request), 210 (new content), 204 (client device); examiner interprets the selection of more than one media content from a larger group as a subset); and in response to the request to generate the new content item and the user selection of the subset of content items, generate a custom content item utilizing the content generation model to synthesize at least one attribute associated with the subset of content items within the custom content item (FIG.
2A, 210, [0046] “the language-guided image-editing system 106 utilizes a cycle augmented generative adversarial neural network 208 with the natural language text 206 (e.g., a visual modification request to “increase brightness and contrast for the image”) and the image 202 to generate the modified image 210”, in this example, examiner interprets the attribute to be the brightness and contrast).

Xu fails to teach a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items; wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account; and one or more content items associated with the user account. However, Wilson teaches a content generation model trained utilizing training content items to generate new content items, wherein the training content items are different from the one or more content items (FIG. 6, [0023] “embodiments can receive a first image (e.g., a screenshot) or set of images (e.g., a video feed) of a first user that indicates the personal style of the first user, such as a hair style or makeup style of the first user. The first image can then be fed to one or more machine learning models in order to learn and capture the personal style of the first user. For example, particular models (e.g., a modified Generative Adversarial Network (GAN)) can perform several training epochs to learn that the first user always wears blue eyeshadow with blue lipstick at a particular pattern”, FIG. 3, 302, [0065]); wherein the at least one attribute comprises a visual characteristic of the one or more content items associated with the user account ([0028] “these modified models can use multiple discriminators that distinguish whether generated images include the personal image style and are real or fake so as to make the output image as realistic as possible”; FIG.
5D, 551, examiner interprets the elements of 551 as visual characteristics associated with the user’s account), and one or more content items associated with the user account (FIG. 5D, 540, 541).

Xu in view of Wilson is considered to be analogous to the claimed invention because both are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques for performing language-guided digital image editing utilizing a cycle-augmented generative adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism of Xu with the technique of training a generation model to produce attribute-specific content taught by Wilson in order to improve techniques of applying data indicative of a personal style to a feature of a user represented in one or more images based on determining or estimating the personal style (see Wilson [Abstract]).

Regarding claim 17, Xu in view of Wilson teaches all of the limitations of claim 16, upon which claim 17 depends. Additionally, Xu teaches wherein receiving the request to generate the new content item comprises receiving a text prompt describing content to depict within the new content item, and further comprising instructions that, when executed by the at least one processor, cause the system to generate the custom content item utilizing the content generation model to synthesize one or more features represented in the content of the text prompt (FIG. 3, [0018] “the language-guided image-editing system utilizes the cyclically trained GAN to modify an image based on a natural language modification request for the image (e.g., text-based request)”; FIG. 2B, examiner interprets 214 to potentially be a text prompt as shown in [0018]).

Claims 4 and 19 are rejected under 35 U.S.C.
103 as being unpatentable over Xu in view of Wilson, as shown in claim 1 above, in further view of Nichoson et al. US 10860196 B1 (hereinafter Nichoson).

Regarding claim 4, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 4 depends. Xu in view of Wilson fails to teach further comprising providing, for display within a graphical user interface of the client device, the custom content item and one or more selectable options to share the custom content item, store the custom content item, or modify the custom content item. However, Nichoson teaches further comprising providing, for display within a graphical user interface of the client device, the custom content item and one or more selectable options to share the custom content item, store the custom content item, or modify the custom content item (FIG. 10, 1004, 1006, 302).

Xu in view of Wilson in view of Nichoson are considered to be analogous to the claimed invention because all are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of editing digital content of Xu in view of Wilson with the technique of providing various selectable options taught by Nichoson in order to improve techniques by which edit experiences for transformation of digital content are delivered in a digital medium environment (see Nichoson [Column 1, lines 54-56]).

Regarding claim 19, Xu in view of Wilson teaches all of the limitations of claim 16, upon which claim 19 depends.
Xu in view of Wilson fails to teach further comprising instructions that, when executed by the at least one processor, cause the system to: receive, from the client device associated with the user account, user feedback for the custom content item; and modify the parameters of the content generation model based on the set of content items utilizing the user feedback to generate an updated content generation model. However, Nichoson teaches further comprising instructions that, when executed by the at least one processor, cause the system to: receive, from the client device associated with the user account, user feedback for the custom content item (FIG. 17, 1702, [Column 15, lines 50-57] “selecting the feedback control 1702 causes a feedback experience (e.g., a digital feedback form) to be presented that enables the user to input feedback about the remixed image 1506. Generally, the feedback may indicate a user impression of the remixed image 1506, such as positive or negative feedback. The feedback may then be published, such as to the image editing service 104 and/or to the user that generated the remixed image 1506”); and modify the parameters of the content generation model based on the set of content items utilizing the user feedback to generate an updated content generation model ([Column 17, lines 13-19] “if multiple similar visual images are identified, a particular similar image may be selected for use based on a particular criteria, such as feedback received regarding the particular similar image. For instance, if the particular similar image is determined to have more positive feedback than the other identified similar images (e.g., more “likes”), the particular similar image is selected”).

Xu in view of Wilson in view of Nichoson are considered to be analogous to the claimed invention because all are in the same field of digital content transformation.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of editing digital content of Xu in view of Wilson with the technique of providing various selectable options taught by Nichoson in order to improve techniques by which edit experiences for transformation of digital content are delivered in a digital medium environment (see Nichoson [Column 1, lines 54-56]).

Claims 6-8, 14-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Wilson in further view of Brandt et al. US 20240169624 A1 (hereinafter Brandt).

Regarding claim 6, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 6 depends.
Xu in view of Wilson fails to teach identifying, from an additional user account of the content management system, one or more additional content items for fine-tuning the parameters of the content generation model trained to generate the new content items; modifying the parameters of the content generation model based on the at least one attribute associated with the one or more content items and at least one additional attribute associated with the one or more additional content items; and generating the custom content item utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items and the at least one additional attribute associated with the one or more additional content items within the custom content item. However, Brandt teaches identifying, from an additional user account of the content management system, one or more additional content items for fine-tuning the parameters of the content generation model trained to generate the new content items ([0078] FIG. 1, “the image editing system 104 provides functionality by which a client device (e.g., a user of one of the client devices 110a-110n) generates, edits, manages, and/or stores digital images”, examiner interprets the additional client devices to be linked to additional users); modifying the parameters of the content generation model based on the at least one attribute associated with the one or more content items and at least one additional attribute associated with the one or more additional content items (FIG. 3, [0038] “the language-guided image-editing system 106 on the server device(s) 102 learns parameters for one or more neural networks…the client device 110 obtains (e.g., downloads) the language-guided image-editing system 106 with one or more neural network with the learned parameters from the server device. 
modified digital images in accordance with natural language requests independent from the server device(s) 102”; [0054] “For example, FIG. 3 illustrates the language-guided image-editing system 106 learning parameters of an editing description neural network”); and generating the custom content item utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items and the at least one additional attribute associated with the one or more additional content items within the custom content item (FIG. 22-24, FIG. 30, [0370] “the scene-based image editing system 106 modifies the visual indication 2218 of the selection to indicate that the object 2208b has been added to the selection”, examiner interprets adding objects (with their own unique attributes) to a potentially modified image as generating custom content with different attributes).

Xu in view of Wilson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of modifying digital content of Xu in view of Wilson with the technique of generating custom content from additional users taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Regarding claim 7, Xu in view of Wilson in view of Brandt teaches all of the limitations of claim 6, upon which claim 7 depends. Additionally, Brandt teaches further comprising providing access to the custom content item to the user account and the additional user account (FIG. 1, [0078-0079] “a client device sends a digital image to the image editing system 104 hosted on the server(s) 102 via the network 108.
The image editing system 104 then provides options that the client device may use to edit the digital image, store the digital image, and subsequently search for, access, and view the digital image.”).

Regarding claim 8, Xu in view of Wilson teaches all of the limitations of claim 1, upon which claim 8 depends. Xu in view of Wilson fails to teach receiving, from the client device associated with the user account, an attribute weight for the at least one attribute; and generating the custom content item by utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items within the custom content item based on the attribute weight. However, Brandt teaches receiving, from the client device associated with the user account, an attribute weight for the at least one attribute (FIG. 21B-21C, [0299] “a real-world class description graph provides object attributes assigned to a given object, such as shape, color, material from which the object is made, weight of the object, weight the object can support, and/or various other attributes determined to be useful in subsequently modifying a digital image”); and generating the custom content item by utilizing the content generation model to synthesize the at least one attribute associated with the one or more content items within the custom content item based on the attribute weight ([0355] “the scene-based image editing system 106 detects a user interaction with the slider element 2116 of the slider bar 2114, increasing the degree to which the corresponding object attribute appears in the digital image”).

Xu in view of Wilson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of modifying digital content of Xu in view of Wilson with the technique of generating custom content from additional users taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Regarding claim 14, Xu in view of Wilson teaches all of the limitations of claim 11, upon which claim 14 depends. Xu in view of Wilson fails to teach further comprising instructions that, when executed by the at least one processor, cause the computing device to: receive, from the client device associated with the user account, a selection of user-selected content items from the one or more content items; and generate the custom content item utilizing the content generation model to synthesize the at least one attribute associated with the user-selected content items within the custom content item. However, Brandt teaches further comprising instructions that, when executed by the at least one processor, cause the computing device to: receive, from the client device associated with the user account, a selection of user-selected content items from the one or more content items (FIG. 25, [0395] “the scene-based image editing system 106 detects a user interaction selecting the object 2508e”); and generate the custom content item utilizing the content generation model to synthesize the at least one attribute associated with the user-selected content items within the custom content item (FIG. 22-24, FIG.
30, [0370] “the scene-based image editing system 106 modifies the visual indication 2218 of the selection to indicate that the object 2208b has been added to the selection”, examiner interprets that the selected items can be added to a modified image).

Xu in view of Wilson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of modifying digital content of Xu in view of Wilson with the technique of generating custom content from additional users taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Regarding claim 15, Xu in view of Wilson teaches all of the limitations of claim 11, upon which claim 15 depends. Additionally, Xu teaches generate the custom content item utilizing the content generation model to synthesize the content that depicts the content description from the request with the at least one attribute associated with the one or more content items and the at least one additional attribute associated with the one or more additional content items within the custom content item (FIG.
2A, 206, [0045] “a visual modification request includes a natural language text instruction or command that specifies one or more editing operations (e.g., brightness, hue, tone, saturation, contrast, exposure, removal), one or more adjustment types (e.g., increase, decrease, change, set), and/or one or more degrees of adjustments (e.g., a lot, a little, a numerical value)”, examiner interprets text instruction/command as content description).

Xu in view of Wilson fails to teach further comprising instructions that, when executed by the at least one processor, cause the computing device to: identify, from an additional user account of the content management system, one or more additional content items associated with the additional user account; modify parameters of a content generation model based on the at least one attribute associated with the one or more content items and at least one additional attribute associated with the one or more additional content items; and provide access to the custom content item to the user account and the additional user account. However, Brandt teaches further comprising instructions that, when executed by the at least one processor, cause the computing device to: identify, from an additional user account of the content management system, one or more additional content items associated with the additional user account ([0078] FIG. 1, “the image editing system 104 provides functionality by which a client device (e.g., a user of one of the client devices 110a-110n) generates, edits, manages, and/or stores digital images”, examiner interprets the additional client devices to be linked to additional users); modify parameters of a content generation model based on the at least one attribute associated with the one or more content items and at least one additional attribute associated with the one or more additional content items (FIG.
3, [0038] “the language-guided image-editing system 106 on the server device(s) 102 learns parameters for one or more neural networks…the client device 110 obtains (e.g., downloads) the language-guided image-editing system 106 with one or more neural network with the learned parameters from the server device…modified digital images in accordance with natural language requests independent from the server device(s) 102”; [0054] “For example, FIG. 3 illustrates the language-guided image-editing system 106 learning parameters of an editing description neural network”); and provide access to the custom content item to the user account and the additional user account (FIG. 1, [0078-0079] “a client device sends a digital image to the image editing system 104 hosted on the server(s) 102 via the network 108. The image editing system 104 then provides options that the client device may use to edit the digital image, store the digital image, and subsequently search for, access, and view the digital image.”).

Xu in view of Wilson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of modifying digital content of Xu in view of Wilson with the technique of generating custom content from additional users taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Regarding claim 18, Xu in view of Wilson teaches all of the limitations of claim 16, upon which claim 18 depends.
Xu in view of Wilson fails to teach further comprising instructions that, when executed by the at least one processor, cause the system to: receive, from the client device associated with the user account, one or more content item weights for the subset of content items; and generate the custom content item by utilizing the content generation model to synthesize the at least one attribute associated with the subset of content items within the custom content item based on the one or more content item weights. However, Brandt teaches further comprising instructions that, when executed by the at least one processor, cause the system to: receive, from the client device associated with the user account, one or more content item weights for the subset of content items (FIG. 21A-21C, [0334] “the object modification neural network 1806 provides a soft grounding for textual queries via a weighted summation of the visual feature maps 1810. In some cases, the object modification neural network 1806 uses the textual features 1814a-1814b (represented as t∈ℝ^1024×1) as weights to compute the weighted summation of the visual feature”); and generate the custom content item by utilizing the content generation model to synthesize the at least one attribute associated with the subset of content items within the custom content item based on the one or more content item weights ([0229] “a real-world class description graph provides object attributes assigned to a given object, such as shape, color, material from which the object is made, weight of the object, weight the object can support, and/or various other attributes determined to be useful in subsequently modifying a digital image”).

Xu in view of Wilson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of modifying digital content of Xu in view of Wilson with the technique of generating custom content from additional users taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Wilson in view of Nichoson in further view of Brandt.

Regarding claim 20, Xu in view of Wilson in view of Nichoson teaches all of the limitations of claim 19, upon which claim 20 depends. Xu in view of Wilson in view of Nichoson fails to teach further comprising instructions that, when executed by the at least one processor, cause the system to generate a version history for the content generation model by storing, for the user account of the content management system, the content generation model and the updated content generation model. However, Brandt teaches further comprising instructions that, when executed by the at least one processor, cause the system to generate a version history for the content generation model by storing, for the user account of the content management system, the content generation model and the updated content generation model ([0497] “the scene-based image editing system 106 automatically modifies the digital image 4306 (e.g., adjusts the brightness) using user preferences or user history.
For instance, in some cases, the scene-based image editing system 106 tracks the settings typically used by the client device 4304 for a particular modification and implements those settings in response to selection of the selectable option”).

Xu in view of Wilson in view of Nichoson in view of Brandt are considered to be analogous to the claimed invention because all are in the same field of digital content transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the digital editing techniques of Xu in view of Wilson in view of Nichoson with the technique of generating a version history taught by Brandt in order to improve techniques for modifying digital images via scene-based editing using image understanding facilitated by artificial intelligence (see Brandt [Abstract]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim et al. (US 20210200824 A1) teaches a method and apparatus for personalizing a content recommendation model. The method includes obtaining a first content recommendation model used to recommend content to a user of the electronic device, personalizing the first content recommendation model based on a content use history of the user, receiving a second content recommendation model from a server, receiving a personalization model for personalizing the second content recommendation model from the server, personalizing the second content recommendation model by using input/output data of the personalized first content recommendation model and the personalization model, and providing a content recommendation service to the user by using the personalized second content recommendation model. Chang et al. (US 8447608 B1) teaches technologies relating to generating custom language models for audio content.
In some implementations, a computer-implemented method is provided that includes the actions of receiving a collection of source texts; identifying a type from a collection of types for each source text, each source text being associated with a particular type; generating, for each identified type, a type-specific language model using the source texts associated with the respective type; and storing the language models. Denison (US 12346993 B2) teaches methods and a system for fine-tuning an image generated by an image-generation artificial intelligence process, including receiving a generated image for a user prompt. The generated image is analyzed to identify image features included within it. The identified image features are presented on a user interface for user selection for fine-tuning. Selection of an image feature at the user interface is detected, and an adjusted image is generated by fine-tuning the selected image feature in accordance with tuning comments so that the image feature exhibits a style expressed by the user. The adjusted image is returned to the client device for rendering.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZEESHAN SHAIKH whose telephone number is (703)756-1730. The examiner can normally be reached Monday-Friday 7:30AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ZEESHAN MAHMOOD SHAIKH/Examiner, Art Unit 2658 /RICHEMOND DORVIL/Supervisory Patent Examiner, Art Unit 2658
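The method steps summarized for the Chang reference (group source texts by an identified type, build one language model per type, store the models) can be sketched as a minimal unigram pipeline. This is an illustration only: the tokenization, the unigram normalization, and the sample corpus below are assumptions for the sketch, not details taken from the patent.

```python
from collections import Counter, defaultdict

def build_type_specific_models(source_texts):
    """Group (text, type) pairs by type and build one unigram model per type."""
    counts_by_type = defaultdict(Counter)
    for text, text_type in source_texts:
        # Each source text arrives already associated with a particular type
        counts_by_type[text_type].update(text.lower().split())
    # Normalize word counts into a probability distribution per type,
    # standing in for "generating a type-specific language model"
    models = {}
    for text_type, counts in counts_by_type.items():
        total = sum(counts.values())
        models[text_type] = {word: c / total for word, c in counts.items()}
    return models  # "storing the language models" would persist this mapping

# Hypothetical corpus of (source text, identified type) pairs
corpus = [
    ("weather forecast for today", "query"),
    ("play the next song", "command"),
    ("weather radar near me", "query"),
]
models = build_type_specific_models(corpus)
```

In this sketch, each type's model is trained only on the source texts sharing that type, so "weather" is weighted in the "query" model but absent from the "command" model.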

Prosecution Timeline

Feb 03, 2023
Application Filed
Jul 12, 2025
Non-Final Rejection — §101, §103
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Examiner Interview Summary
Nov 14, 2025
Response Filed
Feb 01, 2026
Final Rejection — §101, §103
Apr 01, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579373
SYSTEM AND METHOD FOR SYNTHETIC TEXT GENERATION TO SOLVE CLASS IMBALANCE IN COMPLAINT IDENTIFICATION
2y 5m to grant Granted Mar 17, 2026
Patent 12555575
Wakeup Indicator Monitoring Method, Apparatus and Electronic Device
2y 5m to grant Granted Feb 17, 2026
Patent 12518090
LOGICAL ROLE DETERMINATION OF CLAUSES IN CONDITIONAL CONSTRUCTIONS OF NATURAL LANGUAGE
2y 5m to grant Granted Jan 06, 2026
Patent 12511318
MULTI-SYSTEM-BASED INTELLIGENT QUESTION ANSWERING METHOD AND APPARATUS, AND DEVICE
2y 5m to grant Granted Dec 30, 2025
Patent 12512088
METHOD AND SYSTEM FOR USER-INTERFACE ADAPTATION OF TEXT-TO-SPEECH SYNTHESIS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
99%
With Interview (+55.0%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
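A minimal sketch of how these headline figures could be derived, assuming the simplest methodology (the tool's actual formulas are not disclosed): grant probability as the examiner's career allow rate (granted over resolved), and interview lift as the percentage-point difference in allow rate between cases with and without an interview.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: fraction of resolved cases that ended in a grant."""
    return granted / resolved

def interview_lift_pp(rate_with: float, rate_without: float) -> float:
    """Interview lift in percentage points: allow rate with an interview
    minus allow rate without one."""
    return (rate_with - rate_without) * 100

# Figures from this page: 16 granted of 31 resolved cases
career = allow_rate(16, 31)   # ≈ 0.516, displayed as 52%
```

Note that the underlying with-interview and without-interview rates are not shown on the page, so `interview_lift_pp` only illustrates the arithmetic behind a "+55.0%" style figure.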
