DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Prashanth et al. (“Local Anatomically-Constrained Facial Performance Retargeting”) in view of Chen et al. (US 20230260184 A1).
Regarding Claim 1, Prashanth teaches A computer-implemented method (“Character facial animation is a key aspect of many computer graphics applications” Prashanth 1 Introduction.) for performing local facial rig generation (“Local Anatomically-Constrained Facial Performance Retargeting” Prashanth Title. “Our offline algorithm leverages the expressive power of local blendshape rigs to obtain an initial estimate of the retargeted performance. Then in a second step, an anatomical model built using the target character’s facial geometry is used to constrain the retargeted performance to an anatomically plausible subspace. The result is a powerful method that can perform highly realistic retargeting given only a handful of shapes in correspondence (20 shapes) when compared to full blown production rigs with hundreds of shapes.” Prashanth Conclusion.), the method comprising:
generating a blendshape model including a plurality of vertices, a plurality of meshes, and a plurality of patches (
Prashanth 3.1 Model Setup:
[media_image1.png: Prashanth 3.1 Model Setup excerpt with Eq. (2)], where equation (2) is mapped to a blendshape model, and where p represents an individual patch.
[media_image2.png: source/target patch layout figure], where the figure visually illustrates the patches of a source face model and a target face model.
These face models comprise meshes and vertices:
“Note that the source and target meshes do not need to share the same topology, however we require a consistent mapping between the patches of the source and target models. This is easily achieved if the meshes do all share the same topology, or a UV layout.” Prashanth 3.1 Model Setup.
Prashanth 3.3:
[media_image3.png: Prashanth 3.3 excerpt]);
modifying one or more blendweight values ([media_image4.png: the coefficients α]) associated with each of the plurality of patches (as shown in equation (2), [media_image5.png: Prashanth Eq. (2)]
) based on a plurality of facial depictions included in a facial data collection and one or more sample depictions of a target character (
“Let S be the set of source shapes, and T be the set of target shapes, such that S𝑖 portrays the same expression as T𝑖 . Without loss of generality, let S0 and T0 be the neutral expressions. The sets S and T should be defined as triangle meshes at the origin of a common canonical coordinate frame.” Prashanth 3.1 Model Setup.
The facial depictions are mapped to S (S0, …Si…), which is a facial data collection.
The sample depictions are mapped to T (T0, …Ti…) of a target character.
[media_image6.png: Prashanth Fig. 3], where expressions of S have been retargeted to those of T of the target character.
The blendweight values ([media_image4.png: the coefficients α]) are based on S (facial depictions) and T (sample depictions), because of Prashanth 3.2 Patch-wise Retargeting, which states, “At a high level, we approach the problem by estimating the coefficients 𝛼 of all the patches of the source model (Eq. 1) that can accurately describe the local skin deformations required to match the shape X𝑆’. We then transfer these coefficients to the target model (Eq. 2) to obtain an estimate of the retargeted expression. During this process, we will add several methods to artistically control the result.”);
generating an output facial rig model (equation (2) with transferred [media_image4.png: the coefficients α]) based on the blendshape model (as shown in equation (2), [media_image5.png: Prashanth Eq. (2)]) and the one or more modified blendweight values (transferred [media_image4.png: α] coefficients according to Prashanth 3.2 Patch-wise Retargeting) (
Prashanth 3.2 Patch-wise Retargeting, which states, “At a high level, we approach the problem by estimating the coefficients 𝛼 of all the patches of the source model (Eq. 1) that can accurately describe the local skin deformations required to match the shape X𝑆’. We then transfer these coefficients to the target model (Eq. 2) to obtain an estimate of the retargeted expression. During this process, we will add several methods to artistically control the result.”); and
generating one or more expressive depictions of the target character based at least on the output facial rig (
[media_image6.png: Prashanth Fig. 3], where expressions of S have been retargeted to those of T of the target character in the second row.).
Prashanth does not explicitly disclose that the facial data collection is stored in a database.
Chen teaches the facial data collection could be stored in a database (“Storage 1303 may be organized as a file system, database, or in other ways. Data, instructions, and information may be loaded from storage 1303 into volatile memory 1302 for processing by the processor 1301.” Chen ¶ 108.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen’s database with Prashanth. One of ordinary skill in the art would be motivated to organize data, to efficiently access data, and/or to reuse data. “Storage 1303 may be organized as a file system, database, or in other ways. Data, instructions, and information may be loaded from storage 1303 into volatile memory 1302 for processing by the processor 1301.” Chen ¶ 108.
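For illustration only, the patch-wise linear-combination and coefficient-transfer scheme that the above mapping relies on (Prashanth Eqs. (1)-(2) and § 3.2) can be sketched in NumPy. The function and variable names, array shapes, and toy numbers below are hypothetical simplifications by the Examiner, not Prashanth's implementation:

```python
import numpy as np

def eval_patches(neutral_patches, shape_patches, alpha):
    """Per-patch linear combination in the style of Prashanth Eq. (1)/(2):
    X_p = N_p + sum_i alpha[p][i] * (S_i,p - N_p)."""
    out = []
    for p, base in enumerate(neutral_patches):        # base: (V_p, 3) vertex array
        offset = np.zeros_like(base)
        for i, shape in enumerate(shape_patches):     # each shape: list of patches
            offset += alpha[p][i] * (shape[p] - base)
        out.append(base + offset)
    return out

# Toy data: one patch of 3 vertices, two source shapes, corresponding target shapes.
S0 = [np.zeros((3, 3))]
S = [[np.ones((3, 3))], [np.full((3, 3), 2.0)]]
T0 = [np.full((3, 3), 10.0)]
T = [[np.full((3, 3), 11.0)], [np.full((3, 3), 12.0)]]

alpha = [[0.5, 0.25]]                      # per-patch coefficients from the source
source_fit = eval_patches(S0, S, alpha)    # Eq. (1): source expression estimate
retargeted = eval_patches(T0, T, alpha)    # Eq. (2): same alpha applied to target
```

The same α estimated on the source patches is reused on the target patches, which is the transfer step § 3.2 describes.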
Regarding Claim 2, Prashanth further teaches The computer-implemented method of claim 1,
wherein each facial depiction included in the facial database is associated with one of multiple identities (Prashanth, Figs. 5-7. For example:
[media_image7.png: Prashanth Figs. 5-7 example], where source 1 and source 2 represent two identities.
“Let S be the set of source shapes, and T be the set of target shapes, such that S𝑖 portrays the same expression as T𝑖 . Without loss of generality, let S0 and T0 be the neutral expressions. The sets S and T should be defined as triangle meshes at the origin of a common canonical coordinate frame.” Prashanth 3.1 Model Setup.
The facial depictions are mapped to S (S0, …Si…). S is for one identity as shown in fig. 3:
[media_image6.png: Prashanth Fig. 3].
Where there are multiple identities, there are separate sets of S (S0, …Si…).).
Regarding Claim 3, Prashanth further teaches The computer-implemented method of claim 2,
wherein the facial database includes a neutral facial expression associated with one of the multiple identities and one or more expressive facial expressions associated with the one of the multiple identities (
“Let S be the set of source shapes, and T be the set of target shapes, such that S𝑖 portrays the same expression as T𝑖 . Without loss of generality, let S0 and T0 be the neutral expressions. The sets S and T should be defined as triangle meshes at the origin of a common canonical coordinate frame.” Prashanth 3.1 Model Setup.
The neutral facial expression corresponds to S0. The Examiner has explained that S0 is associated with a selected identity as shown in fig. 3 from different identities shown in Figs. 5-7.
The expressive facial expressions are mapped to Si, associated with a selected identity as shown in fig. 3 from different identities shown in Figs. 5-7.).
Regarding Claim 4, Prashanth further teaches The computer-implemented method of claim 1,
wherein the one or more blendweight values associated with each of the plurality of patches define a weighted linear combination of corresponding patches associated with the plurality of facial depictions included in the facial database (
[media_image8.png: Prashanth Eq. (1)], wherein [media_image4.png: the coefficients α] has been mapped to blendweight values; wherein p is a patch; wherein equation (1) is a linear combination; and wherein Si and S0 are facial depictions in the facial database.).
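For illustration only, the "weighted linear combination of corresponding patches" reading can be exercised numerically: given per-patch shape deltas, the blendweights that best reproduce a given patch follow from ordinary least squares. All names and toy values below are the Examiner's hypothetical simplification:

```python
import numpy as np

def fit_patch_weights(neutral_p, shape_deltas_p, target_p):
    """Least-squares blendweights so that neutral_p + sum_i w_i * delta_i
    approximates target_p for one patch (a weighted linear combination)."""
    A = np.stack([d.ravel() for d in shape_deltas_p], axis=1)  # columns = deltas
    b = (target_p - neutral_p).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

neutral = np.zeros((4, 3))
deltas = [np.ones((4, 3)), np.eye(4, 3)]     # two basis deltas for this patch
target = neutral + 0.7 * deltas[0] + 0.2 * deltas[1]
w = fit_patch_weights(neutral, deltas, target)
```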
Regarding Claim 5, Prashanth further teaches The computer-implemented method of claim 1,
further comprising modifying one or more vertex positions associated with the plurality of vertices included in the blendshape model (
Prashanth 3.3:
[media_image3.png: Prashanth 3.3 excerpt], wherein vertex positions are modified as the expressions of the model change, as shown in fig. 3: [media_image9.png: Prashanth Fig. 3 expressions]).
Regarding Claim 6, Prashanth further teaches The computer-implemented method of claim 1, wherein each of the plurality of patches is associated with a region included in the blendshape model (
Prashanth 3.1 Model Setup:
[media_image1.png: Prashanth 3.1 Model Setup excerpt with Eq. (2)], where equation (2) is mapped to a blendshape model, and where p represents an individual patch.
[media_image2.png: source/target patch layout figure], where the figure visually illustrates the patches of a source face model and a target face model.).
Regarding Claim 7, Prashanth further teaches The computer-implemented method of claim 1, wherein generating the one or more expressive depictions of the target character is further based on an expression delta (
[BRI on the record] With respect to “expression delta,” the Examiner is reading the limitation to mean: positional differences between expression models.
[0039] In this embodiment, blendshape engine 122 transfers expression deltas from a generic prior model (not shown) to optimized blendshape model 210 to generate target character expressions. An expression delta describes positional differences between the vertex positions included in a neutral facial depiction of the generic prior model and an expressive depiction of the generic prior model. For example, given a generic prior model that includes both a neutral facial depiction and an expressive depiction of the generic prior model smiling, blendshape engine 122 may calculate a smile expression delta based on the different positions of corresponding vertices in the neutral facial depiction and the expressive smiling depiction. Blendshape engine 122 may apply the smile expression delta to optimized blendshape model 210 and generate a model representing the target character smiling. The method steps included in this embodiment of the present invention are discussed below in the description of FIG. 3.
Spec. ¶ 39.
[Mapping Analysis]
[media_image5.png: Prashanth Eq. (2)], wherein [media_image10.png] or [media_image11.png] is mapped to expression delta, wherein [media_image12.png] is mapped to an expressive depiction of a target character.
Prashanth 3.3:
[media_image3.png: Prashanth 3.3 excerpt], where the patches are expressed with vertices.).
Regarding Claim 9, Prashanth in view of Chen teaches The computer-implemented method of claim 1, wherein modifying the one or more blendweight values is based on minimizing one or more energy value functions (
[media_image13.png: Prashanth energy function excerpt]
[media_image14.png: Prashanth energy function excerpt]
The Examiner takes Official Notice that an energy function may be minimized to determine parameters. The motivation for combining this well-known knowledge would have been that an optimal/fitting solution could be found. Here, the energy function reflects some of the priorities when fitting models or finding solutions.).
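The noticed principle that an energy function may be minimized to determine parameters can be illustrated with a standard regularized least-squares energy solved in closed form via the normal equations. This is a generic sketch with hypothetical names, not Prashanth's actual energy terms:

```python
import numpy as np

def minimize_quadratic_energy(A, b, lam=1e-3):
    """Minimize E(alpha) = ||A @ alpha - b||^2 + lam * ||alpha||^2.
    Setting the gradient to zero gives (A^T A + lam I) alpha = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = A @ np.array([0.5, -0.25])       # exact solution before regularization
alpha = minimize_quadratic_energy(A, b, lam=1e-9)
```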
Regarding Claim 10, Prashanth teaches The computer-implemented method of claim 1,
wherein the one or more sample depictions of the target character do not include a neutral depiction of the target character (
“Let S be the set of source shapes, and T be the set of target shapes, such that S𝑖 portrays the same expression as T𝑖 . Without loss of generality, let S0 and T0 be the neutral expressions. The sets S and T should be defined as triangle meshes at the origin of a common canonical coordinate frame.” Prashanth 3.1 Model Setup.
The sample depictions could be mapped to (T1, …Ti…) of a target character, and (T1, …Ti…) does not include T0, the neutral depiction.
Note that Claim 1 recites, “modifying one or more blendweight values . . . based on . . . one or more sample depictions of a target character” and Claim 10 recites, “wherein the one or more sample depictions . . . do not include a neutral depiction of the target character.” The combination of these limitations does not require that the modifying of the one or more blendweight values not be based on the “neutral depiction of the target character.”
For example, we have “(a) A is based on B and C. (b) C does not include B. The combination of (a) and (b) does not require that A not be based on B.”).
Claims 11-17 are substantially similar to Claims 1-7. The rejection analyses of Claims 1-7 based on Prashanth in view of Chen are applied to Claims 11-17. In addition, Claim 11 recites, “One or more non-transitory computer-readable media containing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of . . .” (Prashanth 1 Introduction: “Character facial animation is a key aspect of many computer graphics applications.” Chen ¶ 21: “Some embodiments are implemented by a computer system. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen’s use of a computer with Prashanth. One of ordinary skill in the art would be motivated to make calculations faster and more reliable.
Claims 19-20 are substantially similar to Claims 1 and 4. The rejection analyses of Claims 1 and 4 are applied to Claims 19-20. In addition, Claim 19 recites, “A system comprising: one or more memories for storing instructions; and one or more processors for executing the instructions to: . . .” (Prashanth 1 Introduction: “Character facial animation is a key aspect of many computer graphics applications.” Chen ¶ 21: “Some embodiments are implemented by a computer system. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen’s use of a computer with Prashanth. One of ordinary skill in the art would be motivated to make calculations faster and more reliable.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Prashanth in view of Chen as applied to Claims 7 and 17, in further view of Miller (US 20210166459 A1).
Regarding Claim 8, Prashanth further teaches The computer-implemented method of claim 7,
wherein the expression delta defines one or more vertex position differences associated with vertices included in a neutral depiction of a generic prior model and corresponding vertices included in an expressive depiction of the generic prior model (
[media_image5.png: Prashanth Eq. (2)], wherein [media_image10.png] is mapped to expression delta, where [media_image15.png] corresponds to the neutral depiction and [media_image16.png] corresponds to the expressive depiction of a generic prior model.
Prashanth 3.3:
[media_image3.png: Prashanth 3.3 excerpt], where the patches are expressed with vertices. Therefore, [media_image10.png] defines vertex position differences.).
Prashanth in view of Chen’s disclosure is not explicit that [media_image10.png] defines vertex position differences, although this is the understanding in the art.
Miller teaches that the blendshape of Prashanth in view of Chen’s could define vertex position differences (
“In addition to skeletal systems, ‘blendshapes’ can also be used in rigging to produce mesh deformations. A blendshape (sometimes also called a ‘morph target’ or just a ‘shape’) is a deformation applied to a set of vertices in the mesh where each vertex in the set is moved a specified amount in a specified direction based upon a weight. Each vertex in the set may have its own custom motion for a specific blendshape, and moving the vertices in the set simultaneously will generate the desired shape. The custom motion for each vertex in a blendshape can be specified by a ‘delta,’ which is a vector representing the amount and direction of XYZ motion applied to that vertex. Blendshapes can be used to produce, for example, facial deformations to move the eyes, lips, brows, nose, dimples, etc., just to name a few possibilities.” Miller ¶ 148.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Miller’s delta calculation with Prashanth in view of Chen. One of ordinary skill in the art would be motivated to flexibly, expressively, and accurately control a model’s facial expression. “Blendshapes are useful for deforming the mesh in an art-directable way.” Miller ¶ 149. “Blendshapes can be used to produce, for example, facial deformations to move the eyes, lips, brows, nose, dimples, etc., just to name a few possibilities.” Miller ¶ 148.
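For illustration only, Miller's description of a blendshape as per-vertex deltas scaled by a weight (Miller ¶ 148) can be sketched as follows; the names and toy values are hypothetical, not Miller's implementation:

```python
import numpy as np

def apply_blendshapes(neutral_mesh, blendshapes):
    """Each blendshape is a (weight, deltas) pair: every vertex in the mesh
    moves by weight times its own delta vector (Miller para. 148)."""
    out = neutral_mesh.copy()
    for weight, deltas in blendshapes:
        out += weight * deltas
    return out

neutral = np.zeros((4, 3))
smile_deltas = np.array([[0, 1, 0]] * 4, dtype=float)   # per-vertex XYZ motion
brow_deltas = np.array([[0, 0, 1]] * 4, dtype=float)
posed = apply_blendshapes(neutral, [(0.5, smile_deltas), (1.0, brow_deltas)])
```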
Claim 18 is substantially similar to Claim 8. The rejection analysis of Claim 8 is applied to Claim 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU whose telephone number is (571)270-7509. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHENGXI LIU/Primary Examiner, Art Unit 2611