DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This action is responsive to communications: Request for Reconsideration, filed on 12/10/2026. This action is made FINAL.
2. Claims 1 and 3-21 are pending in the case. Claims 1 and 20 are independent claims. Claims 1 and 20 have been amended. Claim 2 is cancelled.
Response to Arguments
Applicant's arguments filed December 10, 2026 have been fully considered but they are not persuasive.
II. Note to Examiner
Applicant indicates the cited U.S. Patent Application Publication No. 2022/0377257 is invalid.
In response, the cited U.S. Patent Application Publication was previously provided in the Notice of References Cited in the non-final Office action mailed March 24, 2025. Thus, Applicant had prior knowledge of and access to the cited U.S. Patent Application Publication. Therefore, a new ground of rejection is not presented.
III. Double Patenting Rejection
Applicant requests the rejection be held in abeyance until substantive issues have been resolved.
Applicant’s request is acknowledged and the rejection is maintained.
IV. Rejection of Claims 1, 3, 11 and 20 Under 35 U.S.C. § 102(a)(1)
Applicant argues (claim 1) that Milman fails to disclose:
Obtaining a fictional character via text input.
Styling the fictional character via text input.
In response, Milman discloses (col. 5-6, ll. 60-6) that an application 116 enables users to interact with the client device 104 to generate parameterized avatars in connection with respective functionality of the application 116. Some examples of functionality that a suitably configured application can provide for a user of the client device 104 include social networking, gaming, multiplayer gaming, online shopping, content creation and/or editing, communication enhancing (e.g., an application for use in connection with composing text messages, instant messages, emails, etc.), and so forth. Milman further discloses (col. 13, ll. 10-17) that an avatar generation module/application provides a user interface where the user is prompted to provide some form of input. Milman additionally discloses (col. 14, ll. 46-52) that FIG. 4 is depicted utilizing touch input to leverage the described functionality, although other types of inputs may also or alternatively be used to leverage the described functionality, including stylus input, keyboard and/or mouse input, voice commands, gaze-based input, gesture input, and so forth, where keyboard and gesture input are exemplary of providing textual input. Thus, Milman discloses obtaining a fictional character via text input and styling the fictional character via text input.
Applicant argues claims 3 and 11 are patentable for at least the reasons that claim 1 is patentable.
In response, claims 3 and 11 are not patentable based on at least their dependence from a rejected base claim.
V. Rejection of Claims 8 and 10 Under 35 U.S.C. § 103
Applicant traverses the rejection of claims 8 and 10 as Neckermann fails to cure deficiencies of Milman as applied to independent claims 1 and 20.
In response, claims 8 and 10 are not patentable based on at least their dependence from a rejected base claim.
VI. Rejection of Claims 12 and 14 Under 35 U.S.C. § 103
Applicant traverses the rejection of claims 12 and 14 as Wilson fails to cure deficiencies of Milman as applied to independent claims 1 and 20.
In response, claims 12 and 14 are not patentable based on at least their dependence from a rejected base claim.
To the extent that the response to the applicant's arguments may have mentioned new portions of the prior art references which were not used in the prior office action, this does not constitute a new ground of rejection. It is clear that the prior art reference is of record and has been considered entirely by applicant. See In re Boyer, 363 F.2d 455, 458 n.2, 150 USPQ 441, 444, n.2 (CCPA 1966) and In re Bush, 296 F.2d 491, 496, 131 USPQ 263, 267 (CCPA 1961).
The mere fact that additional portions of the same reference may have been mentioned or relied upon does not constitute a new ground of rejection. In re Meinhardt, 392 F.2d 273, 280, 157 USPQ 270, 275 (CCPA 1968).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7, 9-11, 13, 15 and 17-20 of U.S. Patent No. 11,967,000. Although the claims at issue are not identical, they are not patentably distinct from each other because each provides generating emoticons based on selected fictional character features and styling as determined by a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to conclude that the invention defined in the claims at issue would have been an obvious variation of the invention defined in a claim of the patent because the patent provides a plurality of discriminators that evaluate the relevance of the features to generate emoticons based on select features and styling.
The following table shows the claims of the current application being examined and the conflicting claims of Patent 11,967,000.
Application 18/617,050    Patent 11,967,000
Claims 1, 3-5             Claim 1
Claim 6                   Claim 5
Claim 7                   Claim 7
Claim 8                   Claim 3
Claim 9                   Claim 4
Claim 10                  Claim 7
Claim 11                  Claim 9
Claim 12                  Claims 10, 13
Claim 13                  Claim 10
Claims 14-15              Claim 11
Claim 16                  Claim 15
Claim 17                  Claim 17
Claim 18                  Claim 18
Claim 19                  Claim 19
Claim 20                  Claim 20
The following table shows an example of the corresponding conflicting claims of the current application and Patent 11,967,000.
Application 18/617,050, claim 1:
An apparatus for generating one or more emoticons for a user with respect to one or more fictional characters, the apparatus comprising: a neural network configured to: receive a first image based on a set of features from multiple sets of features associated with the one or more fictional characters; a processor configured to: generate a plurality of images representing one or more emoticons associated with the one or more fictional characters based on each of the multiple sets of features, generate the one or more emoticons by styling at least one image of the user with respect to one or more images selected from the plurality of images and one or more user inputs.

Patent 11,967,000, claim 1:
An apparatus for generating one or more emoticons for one or more users with respect to one or more fictional characters, the apparatus comprising: a plurality of discriminators configured to: receive a first image generated by a multiple localized discriminator (MLD) generative adversarial network (GAN) based on a set of features from multiple sets of features associated with the one or more fictional characters resulting in generation of an output value associated with each of the plurality of discriminators, and determine a weight associated with each of the plurality of discriminators based on a distance between each discriminator and the set of features; at least one processor configured to: generate, using a pre-trained info graph, an image info-graph associated with the first image generated by the MLD GAN upon receiving the first image; calculate a relevance associated with each of the plurality of discriminators based on the image info-graph, the set of features and the distance; and the MLD GAN configured to: generate a plurality of images representing a plurality of emoticons associated with the one or more fictional characters based on each of the multiple sets of features, and generate the one or more emoticons by styling one or more user images with respect to one or more images selected from the plurality of images, and one or more user inputs; and a memory that stores the generated one or more emoticons and the plurality of images.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 3-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites: “obtain, in response to a first textual input by the user, a first image, corresponding to the one fictional character”
Applicant’s Specification has not been found to support the above indicated limitation. Applicant’s Specification (Para 66) discloses the one or more fictional characters may be based on one or more of a story, a conversation, a textual input, and a voice input. Applicant’s Specification (Para 116) further discloses “Referring to FIG. 5, in an embodiment of the disclosure, the info-graph generator 216 may be configured to create a word embedding for a number of physical and personality traits for the characters and contacts. In an embodiment of the disclosure, the number of physical and personality traits may be obtained based on one or more of a textual data, an audio data and a visual data extracted from a database”. Thus, correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 11 and 20-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rebecca Milman et al., US 10,607,065 B2.
Regarding independent claim 1, Milman discloses an apparatus for generating one or more emoticons for a user with respect to one fictional character, the apparatus comprising:
a processor communicatively coupled to memory (Fig. 8 “804”);
wherein the memory includes instructions (Fig. 1 “116”) implementing a neural network (i.e. a generative adversarial network, e.g. GAN – col. 11, ll. 39-41) that, when executed by the processor (Fig. 8 “802”; col. 20, ll. 47-54), cause the apparatus to:
obtain, in response to a first textual input by the user, a first image, corresponding to the one fictional character (i.e. the application 116 enables users to interact with the client device 104 to generate parameterized avatars in connection with respective functionality of the application 116. Some examples of functionality that a suitably configured application can provide for a user of the client device 104 include social networking, gaming, multiplayer gaming, online shopping, content creation and/or editing, communication enhancing (e.g., an application for use in connection with composing text messages, instant messages, emails, etc.), and so forth – col. 5-6, ll. 60-6; an avatar generation module/application provides a user interface where the user is prompted to provide some form of input – col. 13, ll. 10-17; It is to be appreciated that although FIG. 4 is depicted utilizing touch input to leverage the described functionality, other types of inputs may also or alternatively be used to leverage the described functionality, including stylus input, keyboard and/or mouse input, voice commands, gaze-based input, gesture input, and so forth – col. 14, ll. 46-52; keyboard and gesture input are interpreted as textual input);
generate one or more images representing one or more emoticons, each of which is generated from multiple sets of features extracted from the one fictional character (i.e. each of the first and second style machine-learning models 202, 204 is configured to generate parameterized avatars, e.g. emoticons, in a respective style, e.g., a first style and a second style, respectively – col. 8, ll. 63-67; a generative adversarial network (GAN), e.g. machine learning model, provides a library of digital photographs paired with digital cartoon images based on corresponding features – col. 3, ll. 28-32; create differing cartoon versions – col. 14-15, ll. 51-15 – which are selectively available to apply to the user image to create an emoticon – Fig. 4 “424”), and
generate the one or more emoticons by styling the first image with respect to the one or more images with one or more textual user inputs (i.e. select images and provide input via user interfaces - Fig. 4 “408, 410”; using touch, gesture input and a keyboard – col. 14, ll. 46-52).
Regarding claim 3, Milman discloses the apparatus of claim 1, wherein the first image is generated by a multiple localized discriminator generative adversarial network (MLD GAN) based on the set of features from the multiple sets of features associated with one or more fictional characters (i.e. the described systems can leverage trained machine-learning models to generate parameterized avatars of different styles – col. 6, ll. 47-50; the trained machine-learning model uses a generative adversarial network (GAN) – col. 3, ll. 28-30) and the MLD GAN comprises the neural network (i.e. the machine-learning model may be configured as a neural network – col. 3, ll. 17-18), and wherein the neural network comprises a plurality of discriminators (i.e. the described systems can leverage trained machine-learning models – col. 6, ll. 47-50) and the plurality of discriminators is configured to receive the first image generated by the MLD GAN (i.e. a parameterized avatar is generated by a machine-learning model trained based on the previous cartoon style – col. 6, ll. 64-65).
Regarding claim 11, Milman discloses the apparatus of claim 1, wherein the processor is further configured to: generate a fictional character info-graph associated with the one or more fictional characters based on a plurality of features associated with the one or more fictional characters (i.e. a library of cartoon features – abstract), generate a user info-graph associated with one or more users including the user based on a plurality of attributes associated with the one or more users (i.e. generates a condensed parameter vector indicative of features of the depicted person's face – col. 3, ll. 23-25), and map the one or more fictional characters to the one or more users based on the set of features and the plurality of attributes (i.e. determine correspondences of avatar cartoon features (e.g., noses, eyes, mouths, face shapes, hair, and so on) with the respective features of persons in the photorealistic digital images – col. 7, ll. 7-11; the image-to-style network 304 represents a learned mapping between digital photographs and corresponding images of the particular style – col. 11, ll. 35-38).
Regarding independent claim 20, the claim is similar in scope to claim 1; therefore, rationale similar to that applied in the rejection of claim 1 applies herein.
Regarding claim 21, Milman discloses the apparatus of claim 1, wherein the one or more user inputs comprise at least one of a plurality of facial features, a facial tone, a hair color, a body built, a dressing style, and one or more accessories worn by a user (i.e. displaying a portion of a user interface that allows a user of the client device 104 to select one of multiple different avatar styles – col. 7, ll. 53-56; provide style selection input – Fig. 2 “208”; a user interface is presented having instrumentalities that allow a user to select one of the two styles, e.g., a first instrumentality for selecting the first style and a second instrumentality for selecting the second style – col. 8-9, ll. 67-5; selected style models generate noses, eyes, etc. – col. 9, ll. 35-45; user interfaces include an “exploded view” user interface, via which features (e.g., eyes, mouth, nose, hair, facial structure, body type, and so on) are individually selectable; adjusting hair or eye color – col. 15, ll. 31-38).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Rebecca Milman et al., US 10,607,065 B2 as applied to claim 1 above, and further in view of Tom Neckermann et al., US 2022/0377257 A1.
Regarding claim 8, Milman discloses the apparatus of claim 3.
Milman fails to disclose wherein the processor is further configured to: calculate a confidence score associated with the plurality of discriminators based on a plurality of parameters associated with each of the plurality of discriminators, and select remaining sets of features after generation of the first image based on the confidence score associated with the plurality of discriminators, which Neckermann discloses (i.e. persona style discriminator makes predictions indicating the degree of confidence; and modifies parameters of the model – Para 79).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Neckermann’s processor configured to: calculate a confidence score associated with the plurality of discriminators based on a plurality of parameters associated with each of the plurality of discriminators, and select remaining sets of features after generation of the first image based on the confidence score associated with the plurality of discriminators with the method of Milman because performing multiple passes on a discriminator predicts a degree of confidence of an image indicating an image likeness to a desired image and enables determination as to whether additional image adjustments are needed to provide the benefit of improved generation of an image of a desired likeness or customized style.
Regarding claim 10, Milman discloses the apparatus of claim 1, wherein a photorealistic image of a person is converted into an image depicting a cartoonized avatar (col. 1, ll. 26-30).
Milman fails to disclose in generating the one or more emoticons by styling the one or more user images, the processor is further configured to: super-impose the one or more images generated from the set of features associated with the one or more fictional characters onto the at least one image of the user, and generate the one or more emoticons by applying the one or more user inputs to the at least one image of the user, which Neckermann discloses (Fig. 5A, 5C, 5D).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Neckermann’s processor configured to: super-impose the one or more images generated from the set of features associated with the one or more fictional characters onto the at least one image of the user, and generate the one or more emoticons by applying the one or more user inputs to the at least one image of the user with the method of Milman because transfer of a personalized style to a user image may change at least a portion of the user image and provide the advantage of providing an image that expresses customized personal style.
Claims 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Rebecca Milman et al., US 10,607,065 B2 as applied to claim 1 above, and further in view of Kimiko Wilson et al., US 2022/0068296 A1.
Regarding claim 12, Milman discloses the apparatus of claim 11.
Milman fails to disclose, wherein, in generating the fictional character info-graph, the processor is further configured to: extract content data including textual data, audio data, and visual data from a content database, analyze one or more conversations and one or more dialogues between the one or more fictional characters, and determine the plurality of features by analyzing the one or more conversations and the one or more dialogue exchanges, which Wilson discloses (i.e. using identified information as input to an image generator model, e.g. a trained GAN, - Para 21-23; Fig. 1 “156”; extracted information includes input sound, emotion, text, and scenery – Para 10, 16, 34 – that are analyzed with natural language processing of conversation and messages – Para 26 - to determine and generate image representation including an avatar/character – Para 10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wilson’s apparatus wherein, in generating the fictional character info-graph, the processor is further configured to: extract content data including textual data, audio data, and visual data from a content database, analyze one or more conversations and one or more dialogues between the one or more fictional characters, and determine the plurality of features by analyzing the one or more conversations and the one or more dialogue exchanges with the apparatus of Milman because identifying and analyzing extracted information from conversation enables generation of image representations containing appropriate sentiment and provides the advantage of aiding communication.
Regarding claim 14, Milman discloses the apparatus of claim 11.
Milman fails to disclose wherein, in generating the user info-graph, the processor is further configured to: analyze one or more conversations and one or more dialogue exchanges between the one or more users and one or more social media activities of the one or more users, which Wilson discloses (i.e. using identified information as input to an image generator model, e.g. a trained GAN, - Para 21-23; Fig. 1 “156”; extracted information includes input sound, emotion, text, and scenery – Para 10, 16 – that are analyzed with natural language processing of conversation and social media messages – Para 26 - to determine and generate image representation including an avatar/character – Para 10; retrieve social network information from a chat client – Para 17).
Similar rationale as applied in the rejection of claim 12 applies herein.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached on 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615