DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pages 10-14, filed 18 February 2026, with respect to the rejection of claim 1 and claims of similar substance under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Jin et al. (US 2019/0122071 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 2023/0140146 A1) in view of Jin et al. (US 2019/0122071 A1) in view of Borchetta et al. (US 2019/0197590 A1) in view of Makhal et al. (US 2021/0354926 A1) and further in view of Howard et al. (US 2025/0316000 A1).
Regarding claim 1, Ito discloses a method performed by a computing system, the method comprising: receiving a manually-generated avatar including at least a first head; (Paragraph 0020, generation of a first avatar face manually) receiving image data of a user from a camera; (Paragraph 0074, capturing an image of a user, such as a selfie, using a camera, paragraph 0107) generating, via an avatar machine-learning model, a machine-generated avatar of the user based at least on the image data, (Figures 1 and 2 and paragraphs 0012, 0016, 0023, and 0061, generating a second avatar having a head using face parameter values output by a machine learning system) wherein the avatar machine-learning model is trained on training data including a plurality of scans of human heads, (Paragraph 0071, training the machine learning system using user images having facial features, such as eyeglasses, hair, and facial hair) and wherein the machine-generated avatar of the user comprises a second head having facial features that map to actual facial features of the user (Paragraph 0024, facial features from user image are included in the second avatar). Ito does not clearly disclose training data including human heads obtained from a plurality of different human subjects that assume different head positions or facial expressions that represent a population of users. Jin discloses training data including images of different facial expressions corresponding to different emotions expressed by a variety of different people (Paragraph 0053). 
Jin’s training data, including images of different facial expressions corresponding to different emotions expressed by a variety of different people, would have been recognized by one of ordinary skill in the art to be applicable to the training of a machine learning system using user images including facial features of Ito, and the results would have been predictable in the training of a machine learning system using user images of a variety of different people with different facial expressions and facial features. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Ito in view of Jin does not clearly disclose receiving a manually-generated avatar including at least a first head connected to a body. Borchetta discloses user customization of an avatar with certain appearances for a head and a body of the avatar (Figure 4 and paragraph 0079). Borchetta’s technique of generating an avatar with user customization of a head and a body of the avatar to have a certain appearance would have been recognized by one of ordinary skill in the art to be applicable to the manual generation of an avatar face of Ito in view of Jin, and the results would have been predictable in the manual generation, by a user, of an avatar with a head and body customized for the user’s appearance. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Ito in view of Jin and further in view of Borchetta does not clearly disclose training data including a plurality of three-dimensional scans of human heads. Makhal discloses training data that includes 2D and/or 3D images for training a machine learning model (Paragraph 0040).
Makhal’s training data that includes 3D images for training a machine learning model would have been recognized by one of ordinary skill in the art to be applicable to the training of a machine learning system using images of users of Ito in view of Jin and further in view of Borchetta, and the results would have been predictable in the training of a machine learning system using 3D images of users. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Ito in view of Jin in view of Borchetta and further in view of Makhal does not clearly disclose generating a composite avatar of the user by replacing the first head of the manually-generated avatar with the second head of the machine-generated avatar on the body of the manually-generated avatar; and displaying, via a display device, a graphical user interface including the composite avatar. Howard discloses replacing parts of an avatar in a multimodal scene graph to form a composite avatar using a combination of different parts from multiple avatars, including a body and head (Paragraph 0099), where the multimodal scene graph can be used to render an environment for display (Paragraph 0073).
Howard’s technique of replacing parts of an avatar using a combination of parts from multiple avatars to form a composite avatar for display would have been recognized by one of ordinary skill in the art to be applicable to the manually generated avatar having a head and body, and to the avatar having a head generated by a machine learning system, of Ito in view of Jin in view of Borchetta and further in view of Makhal. The results would have been predictable in the replacement of the head of a manually generated avatar having a body with the head of an avatar generated by a machine learning system, forming, for display, a composite avatar having a body that was manually generated and a head that was generated by a machine learning system. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 2, Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard discloses wherein the manually-generated avatar is defined in terms of a first framework of parameters in a first parameter space (Borchetta, paragraph 0079, parameters related to the head and body of the first avatar), and wherein the machine-generated avatar is defined in terms of a second framework of parameters in a second parameter space (Ito, paragraph 0024, parameters related to facial features for the second avatar).
Regarding claim 9, similar reasoning as discussed in claim 1 is applied. Furthermore, Ito discloses a computing system, comprising: a display device (Paragraph 0039, display device); a logic subsystem; and a storage subsystem holding instructions executable by the logic subsystem (Paragraph 0113, processing system that executes software that can be stored in media).
Regarding claim 10, similar reasoning as discussed in claim 2 is applied.
Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 2023/0140146 A1) in view of Jin et al. (US 2019/0122071 A1) in view of Borchetta et al. (US 2019/0197590 A1) in view of Makhal et al. (US 2021/0354926 A1) and further in view of Howard et al. (US 2025/0316000 A1) and further in view of Moustafa et al. (US 2022/0392255 A1).
Regarding claim 3, Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard discloses all limitations as discussed in claim 2. Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard does not clearly disclose receiving video data that tracks movement of the user from the camera; translating, via a video-translation machine-learning model, the video data representing the movement of the user into corresponding parameter values of parameters in the second parameter space; and animating the second head of the composite avatar to mimic a head pose of the user and an expression of the user based at least on the parameter values of the parameters in the second parameter space output by the video-translation machine-learning model. Moustafa discloses generating a customized animation of an avatar based on video data of a user with face and hair information (Paragraph 0095). Moustafa’s technique of generating a customized animation of an avatar based on video data of a user with face and hair information would have been recognized by one of ordinary skill in the art to be applicable to the machine-generated avatar having hair parameters of Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard, and the results would have been predictable in the generation of a customized animation of an avatar generated by a machine learning system with hair parameters according to a video of a user having a face and hair. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 11, similar reasoning as discussed in claim 3 is applied.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 2023/0140146 A1) in view of Jin et al. (US 2019/0122071 A1) in view of Borchetta et al. (US 2019/0197590 A1) in view of Makhal et al. (US 2021/0354926 A1) and further in view of Howard et al. (US 2025/0316000 A1) and further in view of Khot et al. (US 2023/0164298 A1).
Regarding claim 4, Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard discloses all limitations as discussed in claim 2. Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard does not clearly disclose receiving audio data representing speech of the user from a microphone; translating, via an audio-translation machine-learning model, the audio data representing the speech of the user into corresponding parameter values of parameters in the second parameter space; and animating the second head of the composite avatar to mimic an expression of the user to produce the speech of the user based at least on the parameter values of the parameters in the second parameter space output by the audio-translation machine-learning model. Khot discloses a microphone recording speech of a user (Paragraph 0087), using a machine learning model for speech animation based on audio data such as the recording of the user (Paragraph 0222), where the animation can produce movements for the avatar’s mouth as well as hair (Paragraph 0225). Khot’s technique for animating an avatar’s mouth and hair based on recorded speech of a user would have been recognized by one of ordinary skill in the art to be applicable to the machine-generated avatar having hair parameters of Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard, and the results would have been predictable in the animation of a machine-generated avatar with hair parameters according to recorded speech of a user. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 12, similar reasoning as discussed in claim 4 is applied.
Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 2023/0140146 A1) in view of Jin et al. (US 2019/0122071 A1) in view of Borchetta et al. (US 2019/0197590 A1) in view of Makhal et al. (US 2021/0354926 A1) and further in view of Howard et al. (US 2025/0316000 A1) and further in view of Jeong et al. (US 2017/0316617 A1).
Regarding claim 5, Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard discloses all limitations as discussed in claim 2. Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard does not clearly disclose animating the body of the composite avatar to perform a pre-programmed movement based at least on parameter values of parameters in the first parameter space. Jeong discloses animating an avatar using pre-registered avatar motion (Paragraph 0049), where the animation incorporates the body shape of the avatar (Paragraph 0076).
Jeong’s technique of animating an avatar with a body shape using a pre-registered avatar motion would have been recognized by one of ordinary skill in the art to be applicable to the avatar having a head and body with parameters of Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard, and the results would have been predictable in the animation of an avatar having a head and body with a certain body shape using a pre-registered avatar motion. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 13, similar reasoning as discussed in claim 5 is applied.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 2023/0140146 A1) in view of Jin et al. (US 2019/0122071 A1) in view of Borchetta et al. (US 2019/0197590 A1) in view of Makhal et al. (US 2021/0354926 A1) and further in view of Howard et al. (US 2025/0316000 A1) and further in view of Chakrabarty et al. (US 2022/0101577 A1).
Regarding claim 8, Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard discloses all limitations as discussed in claim 1. Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard does not clearly disclose wherein the image data of the user comprises environmental lighting data, and wherein the method further comprises: shading the composite avatar based at least on the environmental lighting data. Chakrabarty discloses relighting a target image to modify the lighting to match a source image (Paragraph 0058). Chakrabarty’s technique of relighting a target image to modify the lighting to match a source image would have been recognized by one of ordinary skill in the art to be applicable to the display of a composite avatar generated from an image captured of a user of Ito in view of Jin in view of Borchetta in view of Makhal and further in view of Howard, and the results would have been predictable in the relighting of an image of a composite avatar for display to match the lighting of an image captured of the user. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 16, similar reasoning as discussed in claim 8 is applied.
Allowable Subject Matter
Claims 6, 7, 14, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 6, the prior art does not clearly disclose the method of claim 2, wherein the manually-generated avatar comprises a plurality of assets defining visual features on the first head, and wherein the method further comprises: deforming each asset of the plurality of assets based at least on the parameter values of the parameters in the second parameter space that define the second head of the machine-generated avatar to fit the asset to the second head of the machine-generated avatar, and attaching the plurality of deformed assets to the second head of the composite avatar.
Regarding claim 14, similar reasoning as discussed in claim 6 is applied.
Claims 17-20 are allowed.
The following is an examiner’s statement of reasons for allowance: the prior art does not clearly disclose the limitations of claims 17-20.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Vahdat et al. (US 2022/0101145 A1) discloses training data including visual attributes of users such as facial features, gender, facial expressions, etc.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHI HOANG whose telephone number is (571)270-3417. The examiner can normally be reached Mon-Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON CHAN can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHI HOANG/Primary Examiner, Art Unit 2619