Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US Pub 2022/0292773 A1) in view of Chen, Shu-Yu, et al. ("Rigidity controllable as-rigid-as-possible shape deformation," Graphical Models 91 (2017): 13-21).
As to claim 1, Liu discloses a computer-implemented method to stylize a three-dimensional (3D) avatar body (Liu, abstract. ¶0269, “automatic face generation systems for various games for both real-style and cartoon-style games”), the method comprising:
performing at least one from the group comprising:
applying a scaling control to a body part of the avatar body to change a size of at least one component of the body part (Liu, ¶0236, “the standard face model in the game based on the prediction of the keypoints of the real face needs to be adjusted. The process needs to ensure that the keypoints of the standard face model in the game and the real face are aligned in terms of scale, position, and direction. Therefore, normalization 2906 of the predicted keypoints and the keypoints on the game face model, includes the following parts: normalization of scale, normalization of translation, and normalization of angle.”);
applying a positional control to the body part of the avatar body to change at least one from the group of a shape of the at least one component of the body part,
a position of the at least one component of the body part, and a combination thereof (Liu, ¶0019, “constructing a facial position map from a 2D facial image of a real-life person further includes reconstructing a three-dimensional (3D) facial model of the real-life person based on the updated facial position map.” ¶0093-¶0102.);
applying an orientation control to the body part of the avatar body to change an orientation of the at least one component of the body part; and combinations thereof (Liu, ¶0238, “after normalizing the scale and translation, the face direction is further normalized. As shown in the image 2902 of FIG. 29, the face in the actual photo may not face the lens directly, and there will always be a certain deflection, which may exist on the three coordinate axes. The predicted three-dimensional keypoints of the face along the x, y, and z coordinate axes are sequentially rotated so that the direction of the face is facing the camera. When rotating along x, the z coordinates of key points 18 and 24 (referring to the definition of keypoints in FIG. 1) are aligned, that is, let the depth of the uppermost part of the bridge of the nose be at the same depth as the bottom of the nose, to obtain the rotation matrix R.sub.X. When rotating along the y axis, the z coordinates of keypoints 1 and 17 are aligned to get the rotation matrix R.sub.Y. When rotating along the z axis, the y coordinates of key point 1 and 17 are aligned to get the rotation matrix R.sub.Z. Thus the direction of the keypoints are aligned and the normalized keypoints are shown as below” ¶0266, “normalizing the set of real-life keypoints into a canonical space includes: scaling the set of real-life keypoints into the canonical space; and rotating the scaled set of real-life keypoints according to the orientations of the set of real-life keypoints in the 2D facial image”); and
deforming the body part of the avatar body based on minimizing an energy function associated with at least one from the group comprising: the scaling control, the positional control, the orientation control, and combinations thereof (Liu, ¶0007, “minimizers of the so-called ‘Laplacian energy’”. ¶0008, “The nature of energy minimization is the smoothing of the mesh. If directly applying the aforementioned minimizer, all the detailed features will be smoothed out. Besides, when the keypoints' positions stay unchanged, the deformed mesh is expected to be exactly the same as the original mesh. Out of these considerations, a preferred usage of biharmonic deformation is to solve the vertices' displacement other than their positions. In this way the deformed positions can be written as x′=x+d, where d is the displacement of the unknown vertices in each dimension.” ¶0021, “performing deformation to the mesh of the 3D head template model to obtain a deformed 3D head mesh model by reducing the differences between the first set of keypoints and the second set of keypoints; and applying a blendshape method to the deformed 3D head mesh model to obtain a personalized head model according to the 2D facial image.”).
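For illustration only (not part of the record), the displacement formulation quoted above, where deformed positions are written as x′ = x + d and the “Laplacian energy” minimizer is applied to displacements rather than positions, can be sketched on a 1D path graph. The function names and the path-graph setting are the examiner's hypothetical simplifications, not Liu's implementation:

```python
import numpy as np

def path_laplacian(n):
    """Uniform-weight graph Laplacian of a path with n vertices."""
    L = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                L[i, j] = -1.0
                L[i, i] += 1.0
    return L

def biharmonic_displacements(n, handles):
    """Solve K d = 0 with K = L @ L (bilaplacian), fixing d at handles.

    handles: dict {vertex index: prescribed displacement}. Free vertices
    receive the smoothest displacement field; deformed positions are
    then x' = x + d, so an all-zero handle set leaves the mesh unchanged.
    """
    K = path_laplacian(n) @ path_laplacian(n)
    d = np.zeros(n)
    fixed = list(handles)
    free = [i for i in range(n) if i not in handles]
    for i, v in handles.items():
        d[i] = v
    # Move known handle contributions to the right-hand side and solve.
    rhs = -K[np.ix_(free, fixed)] @ np.array([handles[i] for i in fixed])
    d[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return d
```

Solving for displacements d (instead of positions) directly reflects the quoted observation that unchanged keypoints should leave the deformed mesh identical to the original.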
Liu teaches every limitation of the claim. However, in the interest of compact prosecution, the examiner cites Chen for additional details on deforming the body part of the avatar body based on minimizing an energy function.
Chen further teaches more details of deforming the body part of the avatar body based on minimizing an energy function (Chen, Page 13, “The difference between the Laplacian coordinates of the deformed and the original shapes is minimized to keep the local geometric details. However, both the positional and rotational constraints for the deformation handles are required for these methods to work. As shown in [9], positional and rotational constraints need to be assigned compatibly to avoid artifacts. This is non-trivial and requires additional effort/expertise from the user.” “This principle is modeled as an as-rigid-as-possible (ARAP) energy which has been widely used in geometric processing. Based on this energy, Sorkine et al. [10] present a mesh deformation method.” Page 16, “the energy of our deformation approach is monotonically decreasing so it always converges to some local minima.”).
Liu and Chen are considered to be analogous art because all pertain to 3D deformation. It would have been obvious before the effective filing date of the claimed invention to have modified Liu with the features of “deforming the body part of the avatar body based on minimizing an energy function” as taught by Chen. The suggestion/motivation would have been in order to preserve the geometric details while distributing the necessary distortions uniformly (Chen, abstract).
As to claim 2, claim 1 is incorporated and the combination of Liu and Chen discloses minimizing the energy function comprises using a local-global solver to perform a local step and a global step (Chen, Page 15, 3.2.1, 3.2.2, “global/local step”), wherein the local step comprises computing an optimal rotation by solving a singular value decomposition and the global step comprises solving a system of linear equations (Chen, Page 15, “Similar to [10], the optimal rotation can be obtained explicitly. We first apply singular value decomposition (SVD) to S_i, giving S_i = U_i Σ_i V_i^T. The optimized rigid rotation R_i can be obtained as V_i U_i^T. The sign of the column of U_i corresponding to the smallest singular value should be changed when necessary to make det R_i > 0. The rigid rotation optimization is independent for each r-ring neighborhood, so this optimization can be straightforwardly accelerated in parallel by OpenMP.” Page 15, “The optimal position for p_i can thus be obtained by solving the linear system”).
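For illustration only (not part of the record), the local step Chen describes — fitting an optimal rotation R_i = V_i U_i^T from the SVD of a covariance matrix S_i, with a sign flip on the column of U_i tied to the smallest singular value whenever det R_i < 0 — is the standard rotation-fitting step and can be sketched as follows. The function name is hypothetical:

```python
import numpy as np

def optimal_rotation(P, Q):
    """Local ARAP step: best rigid rotation aligning local edge sets.

    P, Q: (n, 3) arrays of edge vectors around a vertex in the rest and
    deformed shapes. Returns R with det(R) = +1 minimizing the per-cell
    ARAP energy sum ||R @ p - q||^2.
    """
    S = P.T @ Q                      # 3x3 covariance matrix S_i
    U, _, Vt = np.linalg.svd(S)      # S_i = U_i Sigma_i V_i^T
    R = Vt.T @ U.T                   # R_i = V_i U_i^T
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1               # flip the column of U_i tied to the
        R = Vt.T @ U.T               # smallest singular value
    return R
```

Because each per-vertex fit is independent, this step parallelizes trivially, which matches Chen's remark about OpenMP acceleration over r-ring neighborhoods.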
As to claim 3, claim 2 is incorporated and the combination of Liu and Chen discloses applying the positional control comprises satisfying at least one positional constraint by adding positional constraints when performing the global step (Chen, Page 15, “Given the optimized rigid rotations R_i, the r-ring ARAP energy becomes a quadratic function w.r.t. the deformed positions. The optimal position for p_i can thus be obtained by solving the linear system” “For vertex i ∈ H, the specified handle position is c_i. This is equivalent to having a hard constraint p_i = c_i. For each i ∈ H, the corresponding i-th row and i-th column of A will be set to zero except for the diagonal element where A(i, i) = 1. The i-th row of b is set to c_i.”).
As to claim 4, claim 1 is incorporated and the combination of Liu and Chen discloses applying the scaling control comprises specifying a target scale at respective vertices of a mesh of the body part of the avatar body (Liu, ¶0021, “mapping the first set of keypoints to a second set of keypoints based on a set of user-provided keypoint annotations located on a plurality of vertices of a mesh of a 3D head template model; performing deformation to the mesh of the 3D head template model to obtain a deformed 3D head mesh model by reducing the differences between the first set of keypoints and the second set of keypoints; and applying a blendshape method to the deformed 3D head mesh model to obtain a personalized head model according to the 2D facial image.” ¶0236, “The process needs to ensure that the keypoints of the standard face model in the game and the real face are aligned in terms of scale, position, and direction. Therefore, normalization 2906 of the predicted keypoints and the keypoints on the game face model, includes the following parts: normalization of scale, normalization of translation, and normalization of angle.” ¶0237, “For the scale, the distance between the 1st and 17th keypoints from the origin is adjusted to 1, so that the three-dimensional keypoint normalized by scale and translation is p′=(p−c)/∥p.sub.1−c∥.”).
As to claim 5, claim 1 is incorporated and the combination of Liu and Chen discloses applying the positional control comprises manipulating at least one of the shape and the position of the at least one component of the body part to satisfy at least one positional constraint (Liu, ¶0068, “Facial keypoints: pre-defined landmarks that determine shapes of certain facial parts, e.g., corners of eyes, chins, nose tips, and corners of mouth.” ¶0194, “The polygon mesh can be easily converted into pure triangle mesh, which is called the template model. For each template model, 3D keypoints are marked on the template model once by hand. After that, it can be used for deforming into a characteristic head avatar according to the detected and reconstructed 3D keypoints from an arbitrary human face picture.” ¶0201, “The constraints of keypoints' positions: E.sub.k=Σ.sub.i=1∥v′.sub.i−c′.sub.i∥.sup.2, c′.sub.i stands for the detected keypoints positions after mesh deformation.”).
As to claim 6, claim 1 is incorporated and the combination of Liu and Chen discloses applying the orientation control comprises controlling surface normals of the at least one component to satisfy at least one orientation constraint (Liu, ¶0266, “normalizing the set of real-life keypoints into a canonical space includes: scaling the set of real-life keypoints into the canonical space; and rotating the scaled set of real-life keypoints according to the orientations of the set of real-life keypoints in the 2D facial image.”).
As to claim 8, claim 1 is incorporated and the combination of Liu and Chen discloses the body part of the avatar body comprises a head (Liu, abstract, “generating a three-dimensional (3D) head deformation model”).
As to claim 9, claim 1 is incorporated and the combination of Liu and Chen discloses the at least one component of the body part includes one or more from the group of: a jaw, lips, eyes, ears, a forehead, eyebrows, eyelashes, cheeks, a chin, a nose, a face, hair, and combinations thereof (Liu, ¶0068, “Facial keypoints: pre-defined landmarks that determine shapes of certain facial parts, e.g., corners of eyes, chins, nose tips, and corners of mouth.” ¶0104, “four different classification tasks (heads) are implemented for female hair prediction” ¶0105-0120. ¶0207, “when deforming a head model of a game avatar, the region of interests usually is only the face.” ¶0268, “adjusting the symmetrized set of real-life keypoints according to the predefined style associated with the avatar in the game includes one or more of the face length adjustment, face width adjustment, facial feature adjustment, zoom adjustment, and eye shape adjustment.”).
As to claim 10, the combination of Liu and Chen discloses a non-transitory computer-readable medium with instructions stored thereon that, responsive to execution by a processing device, cause the processing device to perform operations to stylize a three-dimensional (3D) avatar body comprising: performing at least one from the group comprising: applying a scaling control to a body part of the avatar body to change a size of at least one component of the body part; applying a positional control to the body part of the avatar body to change at least one from the group of a shape of the at least one component of the body part, a position of the at least one component of the body part, and a combination thereof; applying an orientation control to the body part of the avatar body to change an orientation of the at least one component of the body part; and combinations thereof; and deforming the body part of the avatar body based on minimizing an energy function associated with at least one from the group comprising: the scaling control, the positional control, the orientation control, and combinations thereof (See claim 1 for detailed analysis.).
As to claim 11, claim 10 is incorporated and the combination of Liu and Chen discloses minimizing the energy function comprises using a local-global solver to perform a local step and a global step, wherein the local step comprises computing an optimal rotation by solving a singular value decomposition and the global step comprises solving a system of linear equations (See claim 2 for detailed analysis.).
As to claim 12, claim 11 is incorporated and the combination of Liu and Chen discloses applying the positional control comprises satisfying at least one positional constraint by adding positional constraints when performing the global step (See claim 3 for detailed analysis.).
As to claim 13, claim 10 is incorporated and the combination of Liu and Chen discloses applying the scaling control comprises specifying a target scale at respective vertices of a mesh of the body part of the avatar body (See claim 4 for detailed analysis.).
As to claim 14, claim 10 is incorporated and the combination of Liu and Chen discloses applying the positional control comprises manipulating at least one of the shape and the position of the at least one component of the body part to satisfy at least one positional constraint (See claim 5 for detailed analysis.).
As to claim 15, claim 10 is incorporated and the combination of Liu and Chen discloses applying the orientation control comprises controlling surface normals of the at least one component to satisfy at least one orientation constraint (See claim 6 for detailed analysis.).
As to claim 16, the combination of Liu and Chen discloses a system, comprising: a memory with instructions stored thereon; and a processing device, coupled to the memory, the processing device configured to access the memory and execute the instructions, wherein the instructions cause the processing device to perform operations to stylize a three-dimensional (3D) avatar body comprising: performing at least one from the group comprising: applying a scaling control to a body part of the avatar body to change a size of at least one component of the body part; applying a positional control to the body part of the avatar body to change at least one from the group of a shape of the at least one component of the body part, a position of the at least one component of the body part, and a combination thereof; applying an orientation control to the body part of the avatar body to change an orientation of the at least one component of the body part; and combinations thereof; and deforming the body part of the avatar body based on minimizing an energy function associated with at least one from the group comprising: the scaling control, the positional control, the orientation control, and combinations thereof (See claim 1 for detailed analysis.).
As to claim 17, claim 16 is incorporated and the combination of Liu and Chen discloses minimizing the energy function comprises using a local-global solver to perform a local step and a global step, wherein the local step comprises computing an optimal rotation by solving a singular value decomposition and the global step comprises solving a system of linear equations (See claim 2 for detailed analysis.).
As to claim 18, claim 17 is incorporated and the combination of Liu and Chen discloses applying the positional control comprises satisfying at least one positional constraint by adding positional constraints when performing the global step (See claim 3 for detailed analysis.).
As to claim 19, claim 16 is incorporated and the combination of Liu and Chen discloses applying the scaling control comprises specifying a target scale at respective vertices of a mesh of the body part of the avatar body (See claim 4 for detailed analysis.).
As to claim 20, claim 16 is incorporated and the combination of Liu and Chen discloses applying the orientation control comprises controlling surface normals of the at least one component to satisfy at least one orientation constraint (See claim 6 for detailed analysis.).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US Pub 2022/0292773 A1) in view of Chen, Shu-Yu, et al. ("Rigidity controllable as-rigid-as-possible shape deformation," Graphical Models 91 (2017): 13-21) and Sakaguchi (US Patent 6,310,627 B1).
As to claim 7, claim 6 is incorporated and the combination of Liu and Chen discloses controlling surface normals comprises setting target normals to closest (Chen, Page 13, “Laplacian deformation methods [5–8] have been explored extensively for surface based deformation.” Page 14, “One typical approach is to preserve the Laplacian differential coordinates [5–8] during the shape deformation. These differential coordinates based methods need the user to specify compatible positional and rotational constraints for deformation handles. As shown in [18], incompatible constraints will introduce artifacts. Popa et al. [19] deform the shape with different material properties based on the deformation gradient method. Again, rotational constraints of the deformation handles should be assigned. Our method allows materials with different stiffness to be simulated, while only requiring positional constraints at handles which makes the modeling procedure much easier. For human body deformations, Murai et al. [20] propose a sophisticated mathematical model to learn parameters for simulating deformation dynamics of soft human tissues. Compared with this work, our work uses a simpler model and can deal with general shapes.” Page 16, “Thanks to the normalization, we find a default set of the parameters works well for a wide range of shapes.”).
The combination of Liu and Chen does not explicitly disclose a cubic shape. However, a cubic shape is obvious because a cube is one of the general shapes (for example, a voxel).
Sakaguchi teaches a cubic shape (Sakaguchi, Col 24, lines 36-66, “The calculated lattice point ai corresponding to the lattice point ci has a normal vector in a direction opposite from a normal direction of the lattice point ci. If no lattice point ai exists in the normal direction of the lattice point ci, a small frame Q having a cubic shape is set around the lattice point ci as shown in FIG. 23. The lattice point ai of the standard figure model M having a normal vector in a direction opposite from the normal direction of the lattice point ci of the standard garment, which comes to be first included in the small frame during the enlargement of the small frame, becomes a lattice point corresponding to the lattice point ci.”).
Liu, Chen, and Sakaguchi are considered to be analogous art because all pertain to 3D deformation. It would have been obvious before the effective filing date of the claimed invention to have modified Liu and Chen with the features of “a cubic shape” as taught by Sakaguchi. The claim would have been obvious because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bradley et al. (US Pub 2023/0237753 A1) teaches that a series of Laplacian mesh deformations (see e.g., [SCOL+4]) can be used to positionally constrain the facial-hair-free geometric elements of the reference 3D facial shape, while allowing the geometric elements corresponding to facial-hair-covered regions of the reference 3D facial shape to deform in a “semi-rigid” or “as-rigid-as-possible” manner.
Singh et al. (US Pub 2023/0215094 A1) teaches that surface-aware deformations use local or global properties of the geometry being deformed to additionally inform the deformation mapping of points on the geometry.
Sachs et al. (US Pub 2019/0122411 A1) teaches animating the rigged or orientation-ready 3D model of the head by mapping audio samples and/or video data to rig parameters.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached on M-F 8-5 PST Mid-day flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/Primary Examiner, Art Unit 2613