DETAILED ACTION
This action is responsive to the Request for Continued Examination filed on 01/05/2026. Claims 2, 12, 20-24, and 26 have been canceled. Claims 1, 8, 11, and 18 have been amended. Claims 1, 3-11, 13-19, and 25 are pending in the case. Claims 1 and 11 are independent claims.
Claim Interpretations/Examiner’s Notes
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. Further, during examination, the claims must be interpreted as broadly as their terms reasonably allow (see In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 U.S.P.Q.2d 1827, 1834 (Fed. Cir. 2004)). Also, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims (see In re Van Geuns, 988 F.2d 1181, 26 U.S.P.Q.2d 1057 (Fed. Cir. 1993)). The following is provided to aid the reader in understanding how at least some claim elements (also commonly referred to as claim limitations), as a whole, have been considered in the rejections below:
“indicating” [e.g. claims 3, 6, 13, and 16] = for purposes of prior art analysis, it is noted that the displaying of a graphical user interface element itself carries patentable weight, but the intended use/result of what that element may or may not have conveyed/“indicated” to a human observer amounts to non-functional descriptive material,1 and therefore lacks considerable patentable weight for purposes of prior art analysis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-11, 13-19, and 25 are rejected under 35 U.S.C. § 103 as being unpatentable over US Patent Application Pub. No. 2023/0409631 (hereinafter “Futterman”), in view of US Patent Application Pub. No. 2021/0089759 (hereinafter “Todorov”), in further view of US Patent Application Pub. No. 2022/0309131 (hereinafter “Nguyen”).
As to claims 1, 11, and 25, Futterman shows a server device providing a social media platform [¶ 20], a method [¶ 11], and a concomitant non-transitory computer-readable storage medium [¶ 01], comprising:
one or more processors configured to execute instructions stored in associated memory [¶ 21] to:
send a first portion of the instructions to a client device [¶ 19] to cause the client device to display a graphical user interface (GUI) of the social media platform [e.g. the server sends first instructions to an endpoint/client device to display a social media graphical user interface (¶ 20)];
receive at least one image of a face of a user of the client device [e.g. at least one image of a face of a user of the client device is received (¶ 31)] and a selection of at least one predetermined style via the GUI [e.g. a selection of at least one predetermined criteria/style (¶¶ 14, 32-33, & 39-41)];
generate a set of embeddings based on visual features of the face, the set of embeddings representing the face of the user;
send a second portion of the instructions to the client device to cause the client device to display in the GUI a first selector for requesting generation of a first plurality of artificial intelligence (AI) profile pictures via a first AI model, and a second selector for requesting generation of a second plurality of AI profile pictures via a second AI model;
receive, from the client device, a user selection of the first selector or the second selector via the GUI;
generate {…} an input feature vector based on the set of embeddings generated based on the visual features of the face, which represent the face of the user, and a user input selecting the at least one predetermined style;
input the input feature vector into the first AI model or the second AI model based on the user selection of the first selector or the second selector, and generate, by the first AI model or the second AI model, the first plurality of AI profile pictures or the second plurality of AI profile pictures; and
send the first plurality of AI profile pictures or the second plurality of AI profile pictures to the client device [Futterman is operable to generate a set of embeddings based on visual features of the face, the set of embeddings representing the face of the user (¶¶ 31 & 39). Futterman teaches displaying multiple selectors, wherein each selector enables the generation of a different AI profile picture destined for a corresponding platform (¶ 33). Futterman also shows that each platform may be associated with its own designated AI model for purposes of AI profile picture generation (¶ 24). Futterman additionally shows that, by using the at least one image and the at least one predetermined criteria/style as an input vector, a plurality of artificial intelligence (AI) profile pictures are generated via a selected AI model/platform and sent to the client device (¶¶ 23-26 & 33-44)].
As indicated above, Futterman shows its own version of “generate a set of embeddings based on visual features of the face, the set of embeddings representing the face of the user” and of “generate an input feature vector based on the set of embeddings generated based on the visual features of the face, which represent the face of the user, and a user input selecting the at least one predetermined style.” Nonetheless, Futterman does not appear to explicitly recite doing so via a “text encoder” as apparently intended. In an analogous art, Todorov shows:
generate a set of embeddings based on visual features of the face, the set of embeddings representing the face of the user [A set of embeddings (¶ 61) are generated based on visual features of the face, the set of embeddings representing the face of the user (¶¶ 06, 11, 50, 57, & 61).]; {and}
generate, by a text encoder, an input feature vector based on the set of embeddings generated based on the visual features of the face, which represent the face of the user, and a user input selecting the at least one predetermined style [An input feature/multi-dimensional vector (¶¶ 06, 13, 36-37, 48, & 53-55) is generated by a text encoder (¶¶ 23, 28, & 54-59) based on the set of embeddings generated based on the visual features of the face (¶ 61), which represent the face of the user, and a user input selecting the at least one predetermined style/trait modification (¶¶ 13, 36-37, 48, & 53-55)];
One of ordinary skill in the art, having the teachings of Futterman and Todorov before them prior to the effective filing date of the claimed invention, would have been motivated to incorporate Todorov’s text encoder teachings into Futterman. The rationale for doing so would have been that Todorov’s “approach allows for automatically, quickly, and realistically modifying photos of faces” (Todorov: ¶ 21). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Futterman and Todorov (hereinafter, the “Futterman-Todorov” combination) in order to obtain at least the above aspects of the invention as recited in claims 1, 11, and 25.
Moreover, as indicated above, Futterman (of the Futterman-Todorov combination) shows at the very least indirect AI model selectors (Futterman: ¶¶ 24 & 33). Nonetheless, it is assumed arguendo that Futterman-Todorov does not explicitly recite selecting the AI models directly as apparently intended. In an analogous art, Nguyen shows:
send a second portion of the instructions to the client device to cause the client device to display in the GUI a first selector for requesting generation of a first plurality of artificial intelligence (AI) profile pictures via a first AI model, and a second selector for requesting generation of a second plurality of AI profile pictures via a second AI model; receive, from the client device, a user selection of the first selector or the second selector via the GUI; {…} and generate, by the first AI model or the second AI model, the first plurality of AI profile pictures or the second plurality of AI profile pictures [Nguyen shows displaying in the GUI a first selector for requesting generation of a first plurality of AI pictures (which may have the intended use of being assigned to a profile) via a first AI model, and a second selector for requesting generation of a second plurality of AI pictures via a second AI model, each selector being selectable to generate a corresponding plurality of AI pictures via the selected AI model (Nguyen: ¶ 90).];
One of ordinary skill in the art, having the teachings of Futterman, Todorov, and Nguyen before them prior to the effective filing date of the claimed invention, would have been motivated to incorporate Nguyen’s AI model selectors into Futterman-Todorov. The rationale for doing so would have been to improve user experience by giving the user more, clearer control over exactly how their desired AI pictures will be generated. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Futterman, Todorov, and Nguyen (hereinafter, the “Futterman-Todorov-Nguyen” combination) in order to obtain the invention as recited in claims 1, 11, and 25.
As to dependent claims 3 and 13, Futterman-Todorov-Nguyen further shows:
wherein each of the first selector and the second selector includes a time parameter, the time parameter on the first selector and the time parameter on the second selector indicating that the generation of the first plurality of AI profile pictures via the first AI model is slower than the generation of the second plurality of AI profile pictures via the second AI model, and each of the first selector and the second selector includes a number indicating a quantity of AI profile pictures to be generated, the number on the first selector being greater than the number on the second selector [As shown above, Futterman-Todorov-Nguyen shows how the first and second selectors may comprise a plurality of parameters and indicators (see, for example, Futterman: ¶ 33; Todorov: ¶¶ 06, 27-29, & 54; and Nguyen: ¶ 90). Apart from the selectors themselves, the rest of claims 3 and 13 appears to be drawn to specific kinds of “parameters” (like “time” parameters) and/or “indicators” per se tacked on to the first and second selectors and meant to convey meaning to a human observer. As illustrated above, the intended use/result of what any given element may or may not have conveyed/“indicated” to a human observer amounts to non-functional descriptive material,2 especially when the specification itself appears to illustrate these parameters and indications as mere alphanumeric labels intended purely to convey meaning to a human reader (see Specification: items 90 and 92 in fig. 5), and therefore lacks considerable patentable weight for purposes of prior art analysis. Additionally/alternatively, it would have been obvious to adapt Futterman-Todorov-Nguyen’s existing parameters and indicators to convey additional information to the reader, such as a “time” and/or a “number” as apparently intended, with the intention that they would also “indicate” or convey the same intended results as currently claimed.].
As to dependent claims 4 and 14, Futterman-Todorov-Nguyen further shows:
wherein a first parameter size of the first AI model is larger than a second parameter size of the second AI model [e.g. the first AI model has a larger parameter size than the second AI model (Futterman: ¶¶ 23-26 & 34-44)].
As to dependent claims 5 and 15, Futterman-Todorov-Nguyen further shows:
wherein the one or more processors are further configured to, upon receiving selection of an AI profile picture from the first plurality of AI profile pictures or the second plurality of AI profile pictures via the GUI, designate the selected AI profile picture as a current profile picture of the user on the social media platform [“{…} all of the new images of the plurality of new images (and, optionally, all of the images of the plurality of images acquired in step 204) may be presented to a user along with the corresponding scores for the images. The user may then manually select an image as the profile image for the subject, guided by the scores which may help to identify the potential “best” image(s) for the target online platform. {…}” (Futterman: ¶ 44)].
As to dependent claims 6 and 16, Futterman-Todorov-Nguyen further shows:
wherein the GUI is configured to display the selected AI profile picture with a watermark or frame indicating that the selected AI profile picture was generated with AI [e.g. the GUI is configured to display a selected AI profile picture with a frame/watermark indicating that the selected AI profile picture was generated with AI (Futterman: ¶¶ 23-26 & 34-44). Moreover, as indicated above, what the display may or may not “indicate” or convey to a human observer (like “indicating that the selected AI profile picture was generated with AI”) amounts to an intended result of otherwise non-functional descriptive material (and therefore would not appear to carry significant patentable weight for purposes of prior art analysis).].
As to dependent claims 7 and 17, Futterman-Todorov-Nguyen further shows:
wherein the GUI is configured to generate a post under an account of the user featuring the selected AI profile picture [e.g. the GUI is configured to generate a post under an account of the user featuring a selected AI profile picture (Futterman: ¶¶ 15, 22, & 42)].
As to dependent claims 8 and 18, Futterman-Todorov-Nguyen further shows:
send a third portion of the instructions to the client device to cause the client device to display, in the GUI, a plurality of example images representing a plurality of predetermined styles for the user to select; and receive, from the client device, a selection of at least one of the example images by the user as the selection of the at least one predetermined style [e.g. the GUI is configured to display a plurality of example images representing a plurality of predetermined criteria/styles, and selection of at least one of the example images by the user constitutes the selection of the at least one predetermined criteria/style (Futterman: ¶¶ 14, 32-33, & 39-41)].
As to dependent claims 9 and 19, Futterman-Todorov-Nguyen further shows:
wherein the one or more processors are configured to, responsive to receiving a request from the user for further generation: generate an additional plurality of AI profile pictures via the first AI model or the second AI model; and send the additional plurality of AI profile pictures to the client device [e.g. the AI profile picture generation procedure may be repeated upon request/user desire (Futterman: ¶¶ 23-26 & 34-44)].
As to dependent claim 10, Futterman-Todorov-Nguyen further shows:
wherein the GUI allows the user to request the further generation up to a predetermined number of times [e.g. the GUI allows the user to repeat the AI profile picture generation procedure at least a predetermined number of times (Futterman: ¶¶ 23-26 & 34-44)].
Response to Arguments
“Applicant disagrees with the Office action's interpretation that the word "indicating" in claims 3, 6, 13, and 16 is intended use that amounts to non-functional descriptive material, and therefore lacks considerable patentable weight. Applicant submits that the word "indicating" as used in claims 3, 6, 13, and 16 carries patentable weight, and respectfully requests proper consideration by the Office.”
The Office respectfully disagrees. Futterman-Todorov-Nguyen shows how the first and second selectors may comprise a plurality of parameters and indicators (see, for example, Futterman: ¶ 33; Todorov: ¶¶ 06, 27-29, & 54; and Nguyen: ¶ 90). Apart from the selectors themselves, the rest of claims 3, 6, 13, and 16 appears to be drawn to specific kinds of “parameters” (like “time” parameters) and/or “indicators” per se tacked on to the first and second selectors and meant to convey meaning to a human observer. As illustrated above, the intended use/result of what any given element may or may not have conveyed/“indicated” to a human observer amounts to non-functional descriptive material,3 especially when the specification itself appears to illustrate these parameters and indications as mere alphanumeric labels intended purely to convey meaning to a human reader (see Specification: items 90 and 92 in fig. 5), and therefore lacks considerable patentable weight for purposes of prior art analysis.
Applicant’s prior art arguments have been fully considered but are moot in view of the new grounds of rejection presented above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to fully consider these references when responding to this action.
Each of the following references is considered relevant at least to: “generate a set of embeddings based on visual features of the face, the set of embeddings representing the face of the user; {and} generate, by a text encoder, an input feature vector based on the set of embeddings generated based on the visual features of the face, which represent the face of the user, and a user input selecting the at least one predetermined style”:
GONG; Xiaobo (US 20210074046 A1)
Kalarot; Ratheesh et al. (US 20230162407 A1)
Qu; Hui et al. (US 20230316474 A1)
TOMA; Tadamasa et al. (US 20250014256 A1)
HENDERSON; John C. et al. (US 20200026908 A1)
JEONG; Seung Hwan et al. (US 20220207262 A1)
Fanello; Sean Ryan Francesco et al. (US 20230360182 A1)
Chakrabarty; Saikat et al. (US 20220101577 A1)
WANG; Zhen et al. (US 20240338868 A1)
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALVARO R CALDERON IV whose telephone number is (571) 272-1818. The examiner can normally be reached on Monday - Friday (8:30am - 5pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached on (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALVARO R CALDERON IV/
Examiner, Art Unit 2171
/KIEU D VU/ Supervisory Patent Examiner, Art Unit 2171
1 Descriptive material does not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 218 USPQ 401, 403 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). See also MPEP § 2111.05(III).
2 Descriptive material does not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 218 USPQ 401, 403 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). See also MPEP § 2111.05(III).
3 Descriptive material does not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 218 USPQ 401, 403 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). See also MPEP § 2111.05(III).