Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,429

DATA-DRIVEN PHYSICS-BASED FACIAL ANIMATION RETARGETING

Non-Final OA — §103
Filed: Jan 24, 2024
Examiner: CREARY, LATRELL ANTHONY
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: ETH ZÜRICH
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (24 granted / 33 resolved) — above average, +10.7% vs TC avg
Interview Lift: +45.0% (strong), measured over resolved cases with interview
Avg Prosecution: 2y 10m (typical timeline); 10 applications currently pending
Total Applications: 43 across all art units (career history)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§103: 71.0% (+31.0% vs TC avg)
§112: 1.1% (-38.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 33 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chandran (US-20210279956-A1) in view of Zheng (Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C. Bühler, Xu Chen, Michael J. Black, Otmar Hilliges; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 13545-13555) in further view of Isner (US-20070035541-A1).

Regarding claim 1, Chandran teaches a computer-implemented method comprising: generating, based on an input target facial identity, a facial identity code in an input identity latent space (Para. 35-36 and Para. 45-46: discloses an identity encoder generating an identity code representing a facial identity within a latent representation space); and generating one or more simulator control values based on the facial identity code and an input source facial expression (Para. 37-38 and Fig. 2: discloses that identity and expression codes function as control parameters governing face generation).
Chandran further teaches generating a simulated avatar based on one or more identity-specific control values, wherein each identity-specific control value corresponds to one or more of the simulator control values and is in an output facial identity space associated with an output target facial identity (Para. 38: describes a decoder that outputs face geometry using identity and expression codes; Para. 43: discloses a decoder generating faces having new identities and expressions; Para. 46: describes vertex displacements that deform a reference mesh into a face. Chandran generates deformable face mesh geometry based on identity-specific control values).

Chandran fails to teach converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space; generating one or more canonical simulator control values based on the canonical-space point; and explicit mention of a simulated active soft body.

Zheng teaches converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space (Sec. 3.1-3.3 and Fig. 2: discloses mapping between deformed facial geometry space and canonical facial space, including finding canonical points corresponding to spatial points, which teaches converting spatial input points to canonical-space points); and generating one or more canonical simulator control values based on the canonical-space point (Sec. 3.2: teaches generating control parameters (blend shape weights, pose parameters, deformation weights) based on canonical spatial points and expression parameters).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Chandran's latent-identity-based facial generation system to include Zheng's canonical-space correspondence and deformation framework, because Zheng explicitly teaches canonical geometry representations that improve expression control and deformation modeling across identities and expressions, thereby improving the accuracy and controllability of generated facial models.

Chandran in view of Zheng fails to explicitly teach generating a simulated soft body based on one or more identity-specific control values. Isner teaches generating a simulated soft body based on one or more identity-specific control values (Para. 6: discloses face animation that mimics soft-tissue behavior and controls the three-dimensional positions of points of a surface mesh; Para. 28 and 34: disclose a system that automatically generates a soft tissue solver for animating a 3D mesh, including a face mesh; the soft tissue solver includes animation controls that control objects, and deformation operators that deform the mesh responsive to changes in the control objects. It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of Chandran in view of Zheng to incorporate the teachings of Isner, by using a soft tissue solver that performs 3D mesh deformation/animation based on animation controls. This combination would help produce more realistic face mesh results).
Regarding claim 2, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the spatial input point is converted to the canonical-space point via execution of a material-space-to-canonical-space mapping network associated with the input target facial identity (Zheng, Sec. 3.2-3.3 and Fig. 2: teaches using neural networks (MLPs and deformation networks) that operate on facial geometry and map between deformed/material facial space and canonical space, thereby converting spatial input points into canonical-space points using learned neural networks).

Regarding claim 3, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, further comprising: generating, based on the input source facial expression, a facial expression code in an expression latent space (Chandran, Para. 35-37, Para. 45-48, and Fig. 2: teaches generating a facial expression code using an expression encoder operating in a latent space), wherein the one or more canonical simulator control values are generated further based on the facial expression code (Chandran, Para. 38 and 43: teaches that expression codes are used as control parameters for generating and deforming facial geometry; Zheng, Sec. 3.2: teaches generating deformation control parameters in canonical space based on expression parameters, corresponding to canonical simulator control values. It would have been obvious to use Chandran's expression latent code as input to Zheng's canonical deformation framework to control deformation of canonical facial geometry).
Regarding claim 4, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 3, wherein the expression latent space and the identity latent space are latent control spaces of the physics simulator used to generate the simulated active soft body (Chandran, Para. 35-37, Para. 45-48, and Fig. 2: teaches generating a facial expression code using an expression encoder operating in a latent space; Isner, Para. 6, 29, and 34: teaches a soft tissue simulator whose deformation is controlled by control parameters. It would have been obvious to use Chandran's identity and expression latent spaces as control parameters for a soft tissue simulation framework such as Isner's in order to provide realistic and controllable face deformation).

Regarding claim 5, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, further comprising converting the one or more canonical simulator control values from the canonical space to the one or more identity-specific control values in the output facial identity space (Zheng, Sec. 3.1-3.2 and Fig. 2: teaches converting canonical-space control parameters (blend shapes, pose parameters, skinning weights) into identity-specific deformed facial geometry).
Regarding claim 6, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 5, wherein converting the one or more canonical simulator control values from the canonical space to the one or more identity-specific control values in the output facial identity space comprises: identifying a material-space-to-canonical-space mapping network associated with the input target facial identity (Zheng, Sec. 3.2 and Fig. 2: teaches a neural deformation network mapping between canonical space and material (deformed facial identity) space); and multiplying the one or more canonical simulator control values by a rotational component of a Jacobian of the material-space-to-canonical-space mapping network (Zheng, Sec. 3.3 and Fig. 2: Zheng computes gradient transformations of canonical coordinates relative to deformation parameters; this inherently includes rotational components of the canonical-to-material mapping).

Regarding claim 7, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein each identity-specific control value specifies an actuation that represents a deformation of the simulated active soft body (Isner, Para. 29, claim 25, and Fig. 1: the animation controls/control objects are control values that actuate deformation of the soft tissue).

Regarding claim 8, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the output target facial identity is the input target facial identity, and the output facial identity space is the input facial identity space (Chandran, Para. 35-45 and Fig. 3: teaches generating facial geometry using an identity latent code representing the input facial identity, where the decoder generates output facial geometry from the same identity latent space).
Regarding claim 9, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the output target facial identity is different from the input target facial identity and is received as an input value (Chandran, Fig. 2: discloses generating facial geometry using an identity encoder (152) that produces an identity code (206), which is provided as an input to decoder (156) to generate vertex displacements representing facial deformation. Because the identity code is an independent input to the decoder, a different identity code may be provided to generate facial geometry corresponding to a different output facial identity).

Regarding claim 10, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the simulated soft body is generated using a physics simulator based on the one or more identity-specific control values and further based on one or more collision constraints (Isner, Para. 28-34: teaches generating deformation of a soft tissue/body mesh using a physics-based simulation; Chandran, Fig. 2: teaches identity-specific control values controlling generated facial deformation; Isner, Para. 3 and Para. 30-34: physics-based deformation inherently includes constraints to prevent issues with the mesh during deformation. It would have been obvious to use Chandran's identity-specific control values as control inputs to the physics-based simulator of Isner to generate an identity-dependent soft body deformation. This combination would produce a more accurate model).

Regarding claim 11, Chandran in view of Zheng and in further view of Isner teaches
the computer-implemented method of claim 1, wherein the one or more canonical simulator control values are generated via execution of an actuation neural network that receives a latent code and the canonical-space point as input, wherein the latent code is based on the expression latent code and the identity latent code (Zheng, Sec. 3.2: discloses generating deformation control parameters, including blend shape weights and skinning weights, using neural networks (MLPs) operating on canonical-space points; these neural networks receive canonical-space coordinates as input and produce deformation control values for morphing. Chandran, Fig. 2: discloses generating identity latent codes and expression codes using neural network encoders and providing those latent codes as neural network inputs to control facial deformation. It would have been obvious to use Chandran's identity and expression latent codes as latent inputs to Zheng's canonical-space deformation neural network to generate simulator control values. This combination would improve the system of Chandran by providing more efficient control of the avatar generation).

Regarding claim 12, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 11, further comprising training the actuation neural network based on one or more losses associated with the simulated active soft body (Chandran, Para. 35: teaches training neural networks using a model trainer; Para. 39: teaches training neural networks that control deformation and generation of deformable facial geometry; Para. 51: discloses training neural networks using multiple loss functions. Zheng, Sec. 3.4: further teaches training deformation neural networks using multiple loss functions, including an RGB reconstruction loss, a mask loss, and a deformation-specific FLAME loss, to supervise deformation of the models.
It would have been obvious to one of ordinary skill in the art to train Chandran's neural network controlling deformable geometry using loss functions associated with Isner's simulated soft tissue deformation, as taught by Zheng, in order to improve deformation accuracy).

Regarding claim 13, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 12, wherein the one or more losses comprise a similarity loss that is computed based on the simulated active soft body and a captured shape associated with the input target facial identity (Chandran, Para. 36 and 39: teaches captured facial identity meshes used as shapes during training; Para. 51: discloses an L1 reconstruction loss, which is a similarity loss comparing predicted mesh geometry to ground-truth mesh geometry).

Regarding claim 14, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the canonical-space point is one of a plurality of canonical-space points, and generating the one or more canonical simulator control values is repeated for each of the plurality of canonical-space points (Zheng, Sec. 3.2 and Fig. 2: teaches multiple canonical-space points, since occupancy values are computed for each canonical 3D point; Sec. 3.3 and Fig. 2: discloses that multiple canonical points are computed across many rays/pixels).

Regarding claim 15, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 14, wherein the one or more canonical simulator control values comprise one or more canonical actuation values generated by executing the actuation network for each canonical-space point in the plurality of canonical-space points (Zheng, Sec. 3.2: discloses generating deformation control values for each canonical-space point through neural networks).
Regarding claim 16, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the input facial identity space of the input target facial identity is a soft tissue space of the target identity (Chandran, Para. 36: the identity code represents the facial identity in a learned identity space; Para. 45: discloses a facial identity latent space; Isner, Para. 7-8: teaches a soft tissue deformation space representing facial identity geometry. It would have been obvious to modify the system of Chandran with Isner to produce realistic and accurate facial animation deformation).

Regarding claim 17, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein a shape of the simulated active soft body is determined in accordance with the one or more identity-specific control values (Chandran, Para. 36: the identity code is an identity-specific control value; Para. 38: the shape of the deformable face mesh is determined based on the identity code; Para. 43: blend weights are explicitly values used to help determine mesh shape; Isner, Para. 7-8: teaches a soft tissue deformation space representing facial identity geometry).

Regarding claim 18, Chandran in view of Zheng and in further view of Isner teaches the computer-implemented method of claim 1, wherein the simulated active soft body has a facial expression that is semantically equivalent to the input source facial expression (Chandran, Para. 37: the expression code represents the semantic content of the input expression; Para. 38 and Para. 51: the output mesh has the same facial expression represented by the input expression code).
Regarding claim 19, Chandran teaches one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of (Para. 116: non-transitory computer-readable medium): generating, based on an input target facial identity, a facial identity code in an input identity latent space (Para. 35-36 and Para. 45-46: discloses an identity encoder generating an identity code representing a facial identity within a latent representation space); generating one or more simulator control values based on the facial identity code and an input source facial expression (Para. 37-38 and Fig. 2: discloses that identity and expression codes function as control parameters governing face generation); and generating a simulated avatar based on one or more identity-specific control values, wherein each identity-specific control value corresponds to one or more of the simulator control values and is in an output facial identity space associated with an output target facial identity (Para. 38: describes a decoder that outputs face geometry using identity and expression codes; Para. 43: discloses a decoder generating faces having new identities and expressions; Para. 46: describes vertex displacements that deform a reference mesh into a face. Chandran generates deformable face mesh geometry based on identity-specific control values).

Chandran fails to teach converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space; generating one or more canonical simulator control values based on the canonical-space point; and explicit mention of a simulated active soft body.
Zheng teaches converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space (Sec. 3.1-3.3 and Fig. 2: discloses mapping between deformed facial geometry space and canonical facial space, including finding canonical points corresponding to spatial points); and generating one or more canonical simulator control values based on the canonical-space point (Sec. 3.2: teaches generating control parameters (blend shape weights, pose parameters, deformation weights) based on canonical spatial points and expression parameters). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Chandran's latent-identity-based facial generation system to include Zheng's canonical-space correspondence and deformation framework, because Zheng explicitly teaches canonical geometry representations that improve expression control and deformation modeling across identities and expressions, thereby improving the accuracy and controllability of generated facial models.

Chandran in view of Zheng fails to explicitly teach generating a simulated soft body based on one or more identity-specific control values. Isner teaches generating a simulated soft body based on one or more identity-specific control values (Para. 6: discloses face animation that mimics soft-tissue behavior and controls the three-dimensional positions of points of a surface mesh; Para. 28 and 34: disclose a system that automatically generates a soft tissue solver for animating a 3D mesh, including a face mesh; the soft tissue solver includes animation controls that control objects, and deformation operators that deform the mesh responsive to changes in the control objects.
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of Chandran in view of Zheng to incorporate the teachings of Isner, by using a soft tissue solver that performs 3D mesh deformation/animation based on animation controls. This combination would help produce more realistic face mesh results).

Regarding claim 20, Chandran teaches a system comprising: one or more memories that store instructions, and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to perform the steps of (Para. 116: non-transitory computer-readable medium): generating, based on an input target facial identity, a facial identity code in an input identity latent space (Para. 35-36 and Para. 45-46: discloses an identity encoder generating an identity code representing a facial identity within a latent representation space); generating one or more simulator control values based on the facial identity code and an input source facial expression (Para. 37-38 and Fig. 2: discloses that identity and expression codes function as control parameters governing face generation); and generating a simulated avatar based on one or more identity-specific control values, wherein each identity-specific control value corresponds to one or more of the simulator control values and is in an output facial identity space associated with an output target facial identity (Para. 38: describes a decoder that outputs face geometry using identity and expression codes; Para. 43: discloses a decoder generating faces having new identities and expressions; Para. 46: describes vertex displacements that deform a reference mesh into a face. Chandran generates deformable face mesh geometry based on identity-specific control values).
Chandran fails to teach converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space; generating one or more canonical simulator control values based on the canonical-space point; and explicit mention of a simulated active soft body. Zheng teaches converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space (Sec. 3.1-3.3 and Fig. 2: discloses mapping between deformed facial geometry space and canonical facial space, including finding canonical points corresponding to spatial points); and generating one or more canonical simulator control values based on the canonical-space point (Sec. 3.2: teaches generating control parameters (blend shape weights, pose parameters, deformation weights) based on canonical spatial points and expression parameters). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Chandran's latent-identity-based facial generation system to include Zheng's canonical-space correspondence and deformation framework, because Zheng explicitly teaches canonical geometry representations that improve expression control and deformation modeling across identities and expressions, thereby improving the accuracy and controllability of generated facial models.

Chandran in view of Zheng fails to explicitly teach generating a simulated soft body based on one or more identity-specific control values. Isner teaches generating a simulated soft body based on one or more identity-specific control values (Para. 6: discloses face animation that mimics soft-tissue behavior and controls the three-dimensional positions of points of a surface mesh;
Para. 28 and 34: disclose a system that automatically generates a soft tissue solver for animating a 3D mesh, including a face mesh; the soft tissue solver includes animation controls that control objects, and deformation operators that deform the mesh responsive to changes in the control objects. It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of Chandran in view of Zheng to incorporate the teachings of Isner, by using a soft tissue solver that performs 3D mesh deformation/animation based on animation controls. This combination would help produce more realistic face mesh results).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LATRELL ANTHONY CREARY, whose telephone number is (703) 756-1219. The examiner can normally be reached Mon-Fri, 7:30am-4:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/LATRELL ANTHONY CREARY/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613
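The Office Action's core technical dispute (claims 1, 5, and 6) is a pipeline that maps a material-space point into canonical space, generates a control value there, and maps that value back using the rotational component of the mapping's Jacobian. The sketch below is purely illustrative and hypothetical: the mapping network, actuation function, and all numbers are stand-ins invented for this example, not taken from the application or the cited references. The rotation extraction uses the polar decomposition (via SVD), a standard way to isolate the rotational component of a Jacobian, though the application may use a different method.

```python
import numpy as np

def to_canonical(x):
    """Stand-in for a learned material-to-canonical mapping network
    (hypothetical linear map; a real system would use an MLP)."""
    A = np.array([[0.9, 0.1, 0.0],
                  [-0.1, 0.9, 0.0],
                  [0.0, 0.0, 1.0]])
    return A @ x

def jacobian(f, x, eps=1e-5):
    """Finite-difference Jacobian of f at x."""
    n = x.size
    fx = f(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def rotational_component(J):
    """Rotation factor R of the polar decomposition J = R S,
    computed from the SVD J = U diag(s) V^T as R = U V^T."""
    U, _, Vt = np.linalg.svd(J)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # flip a column to keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R

x = np.array([0.2, -0.1, 0.5])        # material-space input point
xc = to_canonical(x)                  # 1) convert to canonical space
a_canonical = np.tanh(xc)             # 2) stand-in actuation network output
R = rotational_component(jacobian(to_canonical, x))
a_identity = R @ a_canonical          # 3) identity-specific control value
```

The multiplication in step 3 mirrors the claim 6 language ("multiplying the one or more canonical simulator control values by a rotational component of a Jacobian of the material space to canonical space mapping network"); everything else here is scaffolding chosen to make the sketch self-contained.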

Prosecution Timeline

Jan 24, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602862 — LIGHT NORMALIZATION IN COMBINED 3D USER REPRESENTATIONS
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12592027 — THREE-DIMENSIONAL RECONSTRUCTION METHOD, THREE-DIMENSIONAL RECONSTRUCTION APPARATUS AND STORAGE MEDIUM
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12586279 — ANIMATION COMPOSITOR FOR DIGITAL AVATARS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12562109 — DISPLAY DEVICE, AND CONTROL METHOD OF DISPLAY DEVICE
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12544202 — ORAL IMAGE PROCESSING DEVICE AND ORAL IMAGE PROCESSING METHOD
Granted Feb 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 99% (+45.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
