DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant's election without traverse of Species 3 with generic claims 10-13 in the reply filed on March 2, 2026 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kanaujia et al. (US 20130250050 A1), hereinafter referenced as Kanaujia, in view of E et al. (US 20240221287 A1), hereinafter referenced as E.
Regarding Claim 10, Kanaujia discloses a computer-implemented method of generating wearable assets for avatars (Kanaujia, [0091]: teaches a method for generating detailed 3D human models <read on avatars>; [0124]: teaches processing the shape and size of a carried accessory <read on wearable asset>), the method comprising:
receiving, by a computing system comprising one or more processors, a wearable asset associated with an avatar (Kanaujia, [0086]: teaches "module 108 may use the estimated pose and shape of the human object <read on avatar> to automatically identify disproportionate body parts, detecting accessories <read on wearable asset> (e.g., a backpack, suitcase, purse, etc.), the size of the detected accessories, and/or to infer attributes of the human object, such as gender, age and ethnicity"; [0080]: teaches modules 103-108 of the system, as well as their components, are implemented with hardware circuitry, such as one or more processors);
receiving, by the computing system, a mesh model of the avatar (Kanaujia, [0091]: teaches estimating a final pose and shape of a deformable 3D human model <read on mesh model>), wherein
the mesh model of the avatar is associated with a hierarchical skeleton comprising a plurality of skeletal segments and a plurality of medial volumes (Kanaujia, [0107]: teaches "a coarse 3D human shape model 320 comprised of a plurality of cylindrical body parts 322a <read on medial volumes> individually mapped to align with segments <read on skeletal segments> of a skeleton 324a" as shown in FIG. 3B; [0115]: teaches sampling angular priors of the joints in a skeletal hierarchy (such as shoulder and femur skeletal joints) of the 3D human shape model);
[Image: media_image1.png, 233 × 384, greyscale – Kanaujia FIG. 3B as cited above]
determining, by the computing system, a plurality of skin deformations of the mesh model at a plurality of positions of the plurality of skeletal segments (Kanaujia, [0116]: teaches the 3D mesh surface undergoing deformation <read on skin deformation> only under the influence of the skeleton attached to it, where "the shape deformation due to pose may be obtained by first skinning the 3D mesh to the skeleton and transforming the vertices under the influence of associated skeletal joints <read on positions of skeletal segments>," which are associated with different body segments <read on plurality of positions>; Note: the shape deformation is interpreted as being applied to each skeletal segment <read on plurality of skin deformations>); and
generating, by the computing system, based on the plurality of skin deformations of the mesh model of the avatar, [[a deformable mesh model of]] the wearable asset (Kanaujia, [0117]: teaches utilizing Linear Blend Skinning (LBS) for efficient non-rigid deformation of skin to deform the 3D mesh under the influence of the underlying skeleton, where "shape deformation may also be achieved with a human-accessory combination model, as shown with the model 504 on the right including backpack accessory 508 <read on wearable asset> attached to the torso").
However, Kanaujia does not expressly disclose
generating, by the computing system, based on the plurality of skin deformations of the mesh model of the avatar, a deformable mesh model of the wearable asset.
E discloses
generating, by the computing system, based on the plurality of skin deformations of the mesh model of the avatar, a deformable mesh model of the wearable asset (E, [0085]: teaches "the target position of each mesh vertex of the virtual clothing <read on wearable asset> includes a position of each mesh vertex of the virtual clothing after the virtual clothing is deformed <read on generating deformable mesh model> based on the posture information").
E is analogous art with respect to Kanaujia because they are from the same field of endeavor, namely deforming 3D meshes for humanoid figures. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate E's teaching of deforming a virtual clothing model based on a target object's posture (i.e., the pose of a 3D humanoid mesh) into the teaching of Kanaujia. The motivation for doing so would be to achieve a more accurate alignment result by acquiring the target position of each mesh vertex of the virtual clothing, thereby improving the generated output. Therefore, it would have been obvious to combine E with Kanaujia. An illustrative sketch of the type of pose-driven skinning discussed above follows.
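For illustration only (not forming part of the rejection above): the following minimal Python/NumPy sketch shows linear blend skinning (LBS) over a small hierarchical skeleton, of the general kind described in Kanaujia at [0116]-[0117], in which mesh vertices are skinned to the skeleton and transformed under the influence of the associated joints. The three-joint chain, blend weights, and vertex values are hypothetical and chosen solely for illustration; neither reference discloses this particular code.

    import numpy as np

    def rot_z(theta):
        # 4x4 homogeneous rotation about the z-axis.
        c, s = np.cos(theta), np.sin(theta)
        T = np.eye(4)
        T[:2, :2] = [[c, -s], [s, c]]
        return T

    def trans(v):
        # 4x4 homogeneous translation by vector v.
        T = np.eye(4)
        T[:3, 3] = v
        return T

    # Hierarchical skeleton: each joint stores a parent index and a local
    # bind-pose offset; world transforms accumulate from parent to child.
    parents = [-1, 0, 1]                                  # root -> mid -> end
    offsets = [np.zeros(3), np.array([0., 1., 0.]), np.array([0., 1., 0.])]

    def world_transforms(angles):
        world = []
        for j, p in enumerate(parents):
            local = trans(offsets[j]) @ rot_z(angles[j])
            world.append(local if p < 0 else world[p] @ local)
        return world

    bind  = world_transforms([0.0, 0.0, 0.0])             # rest ("bind") pose
    posed = world_transforms([0.0, 0.3, 0.6])             # a sample pose

    # Per-joint skinning matrices map bind-pose space to posed space.
    skin = [P @ np.linalg.inv(B) for P, B in zip(posed, bind)]

    # Mesh vertices with per-joint blend weights (each row sums to 1).
    verts   = np.array([[0.1, 0.5, 0.], [0.0, 1.5, 0.], [-0.1, 2.0, 0.]])
    weights = np.array([[1.0, 0.0, 0.0], [0.3, 0.7, 0.0], [0.0, 0.4, 0.6]])

    def lbs(verts, weights):
        # Each deformed vertex is the weight-blended sum of the per-joint
        # transforms applied to the bind-pose vertex (linear blend skinning).
        vh  = np.hstack([verts, np.ones((len(verts), 1))])
        out = np.zeros_like(verts)
        for j, S in enumerate(skin):
            out += weights[:, [j]] * (vh @ S.T)[:, :3]
        return out

    deformed_body = lbs(verts, weights)
    # An attached accessory (e.g., a backpack mesh) can borrow the blend
    # weights of its nearest body vertices so that it deforms with the
    # underlying skeleton, as with the human-accessory combination model.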
Regarding Claim 11, the combination of Kanaujia and E discloses the computer-implemented method of Claim 10. Additionally, Kanaujia further discloses wherein the generating, by the computing system, based on the plurality of skin deformations of the mesh model of the avatar, a deformable mesh model of the wearable asset comprises:
generating, by the computing system, the mesh model of the avatar based on inputting the wearable asset and the plurality of skin deformations of the avatar into one or more machine-learning models that are configured to generate [[the deformable mesh model of]] the wearable asset (Kanaujia, [0099]: teaches the system <read on machine-learning models> analyzing input images/video and identifying an anomalous shape of the target as an accessory <read on wearable asset being an input> (i.e., a backpack), where the accessory is "removed from estimations in creating the coarse 3D human models and creating the detailed 3D human models <read on mesh model of avatar>" which can then be combined with the removed accessory to obtain a human-accessory combination model as shown in FIG. 5; Note: the system's use of pose prediction and pose refinement to generate a 3D humanoid mesh is interpreted as the operation of one or more machine-learning models (e.g., a trained neural network); [0116]: teaches the 3D mesh surface undergoing deformation <read on skin deformation> only under the influence of the skeleton attached to it, where "the shape deformation due to pose may be obtained by first skinning the 3D mesh to the skeleton and transforming the vertices under the influence of associated skeletal joints," which are associated with different body segments; Note: the shape deformation is interpreted as being applied to each skeletal segment <read on plurality of skin deformations>; [0117]: teaches performing LBS to achieve shape deformation using a human-accessory combination model that includes an accessory <read on wearable asset>).
[Image: media_image2.png, 364 × 472, greyscale – Kanaujia FIG. 5 as cited above]
However, Kanaujia does not expressly disclose
generating, by the computing system, the mesh model of the avatar based on inputting the wearable asset and the plurality of skin deformations of the avatar into one or more machine-learning models that are configured to generate the deformable mesh model of the wearable asset.
E discloses
generating, by the computing system, the mesh model of the avatar based on inputting the wearable asset and the plurality of skin deformations of the avatar into one or more machine-learning models that are configured to generate the deformable mesh model of the wearable asset (E, [0085]: teaches the apparatus acquiring the target position of each mesh vertex of the virtual clothing <read on wearable asset>, where "the target position of each mesh vertex of the virtual clothing includes a position of each mesh vertex of the virtual clothing after the virtual clothing is deformed <read on generating deformable mesh model> based on the posture information").
E is analogous art with respect to Kanaujia because they are from the same field of endeavor, namely deforming 3D meshes for humanoid figures. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate E's teaching of deforming a virtual clothing model based on a target object's posture (i.e., the pose of a 3D humanoid mesh) into the teaching of Kanaujia. The motivation for doing so would be to achieve a more accurate alignment result by acquiring the target position of each mesh vertex of the virtual clothing, thereby improving the generated output. Therefore, it would have been obvious to combine E with Kanaujia. A brief illustrative sketch of computing per-vertex target positions for a deformed garment follows.
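For illustration only: the following Python/NumPy sketch shows one way a target position for each clothing mesh vertex could be computed after the underlying body mesh deforms, in the general spirit of E at [0085]. The nearest-body-vertex binding used here is a hypothetical stand-in and is not asserted to be E's actual method.

    import numpy as np

    def clothing_target_positions(cloth_rest, body_rest, body_posed):
        # Bind each clothing vertex to its nearest body vertex in the rest pose.
        d = np.linalg.norm(cloth_rest[:, None, :] - body_rest[None, :, :], axis=-1)
        nearest = d.argmin(axis=1)
        # Carry each clothing vertex along with its bound body vertex's motion,
        # yielding the target position of each mesh vertex after deformation.
        return cloth_rest + (body_posed[nearest] - body_rest[nearest])

    # Hypothetical data: a body mesh in rest and posed configurations, and a
    # garment offset slightly outward from the body surface.
    body_rest  = np.random.rand(100, 3)
    body_posed = body_rest + 0.1                     # stand-in for a real pose change
    cloth_rest = body_rest[:20] + np.array([0., 0., 0.02])
    cloth_target = clothing_target_positions(cloth_rest, body_rest, body_posed)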
Regarding Claim 12, the combination of Kanaujia and E discloses the computer-implemented method of Claim 10. Additionally, Kanaujia further discloses wherein the determining, by the computing system, a plurality of skin deformations of the mesh model at a plurality of positions of the plurality of skeletal segments comprises:
determining, by the computing system, the plurality of skin deformations based on inputting the mesh model of the avatar at the plurality of positions into one or more machine-learning models that are configured to determine the plurality of skin deformations (Kanaujia, [0099]: teaches the system <read on machine-learning models> analyzing input images/video and identifying an anomalous shape of the target as an accessory (i.e., a backpack), where the accessory is "removed from estimations <read on determined skin deformations> in creating the coarse 3D human models and creating the detailed 3D human models"; [0116]: teaches the 3D mesh surface undergoing deformation <read on skin deformation> only under the influence of the skeleton attached to it, where "the shape deformation due to pose may be obtained by first skinning the 3D mesh to the skeleton and transforming the vertices under the influence of associated skeletal joints," which are associated with different body segments <read on plurality of positions>; Note: the shape deformation is interpreted as being applied to each skeletal segment <read on plurality of skin deformations>). An illustrative sketch of a learned mapping of this kind follows.
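For illustration only: the following Python/NumPy sketch shows the general shape of a learned mapping from a mesh model at a given skeletal position to per-vertex skin deformations, as recited in Claim 12. The two-layer network and random weights below are hypothetical stand-ins for a trained model; no particular architecture is disclosed by the references.

    import numpy as np

    rng = np.random.default_rng(0)
    n_verts = 100
    # Input: flattened rest-pose vertices plus the joint angles for one
    # skeletal position.
    x = np.concatenate([rng.random(n_verts * 3), rng.random(3)])

    # Randomly initialized weights stand in for trained parameters.
    W1 = rng.normal(0, 0.01, (256, x.size))
    W2 = rng.normal(0, 0.01, (n_verts * 3, 256))

    hidden = np.maximum(0.0, W1 @ x)                  # ReLU hidden layer
    deformation = (W2 @ hidden).reshape(n_verts, 3)   # per-vertex offsets

    # Evaluating such a model at each of the plurality of skeletal positions
    # would yield the plurality of skin deformations used downstream.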
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kanaujia et al. (US 20130250050 A1), hereinafter referenced as Kanaujia, in view of E et al. (US 20240221287 A1), hereinafter referenced as E, as applied to Claim 10 above, and further in view of Villegas et al. (US 20220020199 A1), hereinafter referenced as Villegas.
Regarding Claim 13, the combination of Kanaujia and E discloses the computer-implemented method of Claim 10. The combination of Kanaujia and E does not expressly disclose the limitations of Claim 13; however, Villegas discloses wherein the plurality of positions of the plurality of skeletal segments are based on
one or more range of motion parameters of the hierarchical skeleton (Villegas, [0041]: teaches performing motion retargeting with kinematic constraints <read on range of motion parameters> using a digital skeleton <read on hierarchical skeleton> that includes multiple joints 204 (204a-204g) connecting different structural members (e.g., limbs) of the digital skeleton; Note: kinematic constraints are mathematical, geometric, and/or physical restrictions placed on the motion of rigid bodies, which constrain their degrees of freedom of motion).
Villegas is analogous art with respect to Kanaujia in view of E because they are from the same field of endeavor, namely deforming 3D humanoid models. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate kinematic constraints for each skeletal joint of a 3D humanoid model, as taught by Villegas, into the combined teaching of Kanaujia and E. The motivation for doing so would be to restrict the movement of each limb, thereby allowing for more natural movement and yielding predictable results. Therefore, it would have been obvious to combine Villegas with Kanaujia in view of E. A brief illustrative sketch of range-of-motion clamping follows.
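For illustration only: the following Python/NumPy sketch shows range-of-motion (kinematic) constraints enforced by clamping each joint angle of a hierarchical skeleton to a per-joint interval, in the general spirit of Villegas at [0041]. The joint names and numeric limits are hypothetical.

    import numpy as np

    # Per-joint allowable ranges in radians: (min, max) per rotational axis.
    limits = {
        "shoulder": (np.array([-2.0, -1.0, -1.5]), np.array([2.0, 1.0, 1.5])),
        "elbow":    (np.array([0.0, 0.0, 0.0]),    np.array([2.5, 0.0, 0.0])),
    }

    def apply_kinematic_constraints(pose):
        # Clamp each joint's Euler angles into its permitted range of motion,
        # restricting the degrees of freedom of the attached rigid bodies.
        return {j: np.clip(a, *limits[j]) for j, a in pose.items()}

    raw_pose = {"shoulder": np.array([2.4, 0.2, -2.0]),
                "elbow":    np.array([-0.3, 0.1, 0.0])}
    constrained = apply_kinematic_constraints(raw_pose)
    # constrained["elbow"] -> [0., 0., 0.]: hyperextension and off-axis
    # twist are removed, yielding more natural limb movement.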
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Black et al. (US 20100111370 A1) discloses estimating the body shape of an individual to generate a deformable 3D model with clothes;
Grant et al. (US 20220044490 A1) discloses generating an avatar that wears multiple layers of clothing by deforming 3D models of clothes;
Kwai (US 20200013232 A1) discloses converting a 3D object to an avatar using deformable templates;
Makeev et al. (US 20230120883 A1) discloses generating a practical 3D asset avatar model;
Tong et al. (US 20130201187 A1) discloses generating a 3D face based on multi-view images;
Uyyala et al. (US 20180240244 A1) discloses generating high-fidelity 3D dynamic scenes using voxels and facial feature data;
Wu et al. (US 20220237879 A1) discloses training a real-time, direct clothing modeling neural network for animating an avatar for a subject;
Zhang et al. (US 20220319055 A1) discloses a neural human performance capture framework (MVS-PERF) capturing a skeleton, body shape, and clothes displacement, and the appearance of a person from multiview images;
Zheng et al. (US 20250245949 A1) discloses generating a virtual avatar using machine learning models; and
Liang et al. (US 11869163 B1) discloses machine learning-based rendering of clothed humans with a realistic appearance.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG whose telephone number is (703)756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.D.T./Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614