Prosecution Insights
Last updated: April 19, 2026
Application No. 18/410,755

SYSTEM AND METHOD FOR ANIMATING SECONDARY FEATURES

Non-Final OA (§102, §103)
Filed: Jan 11, 2024
Examiner: CLOTHIER, MATTHEW MORRIS
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Digital Domain Virtual Human (US) Inc.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 1y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (3 granted / 3 resolved), above average (+38.0% vs TC avg)
Interview Lift: +0.0% (minimal), based on resolved cases with interview
Avg Prosecution: 1y 11m (fast prosecutor); 29 applications currently pending
Career History: 32 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 3 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

1. The information disclosure statements (IDS) submitted on 3/15/2024 and 8/12/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Specification

2. The disclosure is objected to because of the following informalities:
In [0066], page 17, line 32, "an eyelashes" should read "an eyelash"
In [0085], page 24, line 27, "such futher derived" should read "such further derived"
In [0095], page 28, line 26, "build a input" should read "build an input"
In [0115], page 47, line 2, "recontrusting" should read "reconstructing"
Appropriate correction is required.

Claim Objections

3. Claim 36 is objected to because of the following informalities:
In claim 36, line 5, "offsets of (e.g. recontrusting)" should read "offsets of (e.g. reconstructing)"
Appropriate correction is required.

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 21-31 and 43-44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tena et al. (US-8922553-B1, hereinafter "Tena").

6. As per claim 21, Tena discloses:

A method for generating one or more frames of computer-based facial animation, the method comprising: (Tena, col. 9, lines 47-49, “The one or more animation systems 150 generate frames of the animation based on the animation cues or paths.”)

obtaining, at a processor, a plurality of frames of facial animation training data, the facial animation training data comprising, for each of the plurality of frames of facial animation training data: (Tena, col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 12-15, “Implementations of or processing in method 400 depicted in FIG. 4 may be performed by software (e.g., instructions or code modules) when executed by a central processing unit (CPU or processor) of a logic machine ...”)

a training representation of a training primary face geometry comprising geometric information for a training plurality of primary face vertices; and (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 56-61, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face. In step 430, a plurality of regions are identified based on the training data.”)

a corresponding training representation of a training secondary facial component geometry comprising geometric information for a training plurality of secondary facial component vertices; and (Tena, col. 14, lines 26-31, “In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 56-61, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face. In step 430, a plurality of regions are identified based on the training data.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

the facial animation training data further comprising a subset index comprising indices of a subset of the training plurality of primary face vertices; (Tena, col. 12, lines 37-39, “Segmentation of a face into multiple sub-models by system 100 also allows user interaction to modify the model at a local level ...” and col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.” and col. 14, lines 38-41, “In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 16, lines 53-65, “For region-based models according to techniques of this disclosure, equation (8) can be extended to each of multiple sub-models and border constraints are incorporated as shown in equation (9): ... where vₖⁱ is the kth user-given constraint for the ith model, Bₖⁱ is the corresponding basis, and c₀ⁱ is the initial model parameters of the ith model.” Examiner’s note: The indexed variable vₖⁱ contains a subset of spatial coordinate vertices in the facial mesh.)

training, by the processor, a secondary facial component model using the facial animation training data; (Tena, col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.” and col. 15, lines 5-8, “Returning to FIG. 4, in step 440, a linear model is generated for each identified region based on the training data. Each linear model may represent behavior of the region as learned from the training data.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

obtaining, at the processor, one or more frames of primary face animation, each of the one or more frames of primary face animation comprising an animation representation of an animation primary face geometry comprising geometric information for an animation plurality of primary face vertices; for each of the one or more frames of primary face animation: (Tena, col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 12-15, “Implementations of or processing in method 400 depicted in FIG. 4 may be performed by software (e.g., instructions or code modules) when executed by a central processing unit (CPU or processor) of a logic machine ...”)

generating, by the processor, a corresponding frame of secondary facial component animation based on the frame of primary face animation and the secondary facial component model, the corresponding frame of the secondary facial component animation comprising an animation representation of an animation secondary facial component geometry comprising geometric information for an animation plurality of secondary facial component vertices wherein the secondary facial component geometry is based on the animation primary face geometry. (Tena, col. 9, lines 39-60, “In various embodiments, the one or more animation systems 150 may be configured to enable users to manipulate controls or animation variables or utilized character rigging to specify one or more key frames of animation sequence. The one or more animation systems 150 generate intermediary frames based on the one or more key frames. In some embodiments, the one or more animation systems 150 may be configured to enable users to specify animation cues, paths, or the like according to one or more predefined sequences. The one or more animation systems 150 generate frames of the animation based on the animation cues or paths. In further embodiments, the one or more animation systems 150 may be configured to enable users to define animations using one or more animation languages, morphs, deformations, or the like.” and col. 12, lines 37-43, “Segmentation of a face into multiple sub-models by system 100 also allows user interaction to modify the model at a local level, which would not be possible with a holistic model as further explain below. In the context of face-posing for keyframe animation, system 100 generates region-based models that are locally intuitive and globally consistent.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

7. As per claim 22, Tena discloses:

A method for training a secondary facial component model for use in computer-based facial animation wherein the secondary facial component model takes as input one or more frames of primary face animation, each of the one or more frames of primary face animation comprising an animation representation of an animation primary face geometry comprising geometric information for an animation plurality of primary face vertices and outputs, for each of the one or more frames of primary face animation, a corresponding frame of secondary facial component animation comprising an animation representation of an animation secondary facial component geometry comprising geometric information for an animation plurality of secondary facial component vertices wherein the secondary facial component geometry takes into account the primary face geometry, the method comprising: (Tena, col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.” and col. 15, lines 5-8, “Returning to FIG. 4, in step 440, a linear model is generated for each identified region based on the training data. Each linear model may represent behavior of the region as learned from the training data.” and col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face).” and col. 12, lines 37-39, “Segmentation of a face into multiple sub-models by system 100 also allows user interaction to modify the model at a local level ...” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

obtaining, at a processor, a plurality of frames of facial animation training data, the facial animation training data comprising, for each of the plurality of frames of facial animation training data: (Tena, col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 12-15, “Implementations of or processing in method 400 depicted in FIG. 4 may be performed by software (e.g., instructions or code modules) when executed by a central processing unit (CPU or processor) of a logic machine ...”)

a training representation of a training primary face geometry comprising geometric information for a training plurality of n primary face vertices; and (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 56-61, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face. In step 430, a plurality of regions are identified based on the training data.”)

a corresponding training representation of a training secondary facial component geometry comprising geometric information for a training plurality of m secondary facial component vertices; and (Tena, col. 14, lines 26-31, “In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 56-61, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face. In step 430, a plurality of regions are identified based on the training data.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

the facial animation training data further comprising a subset index comprising indices of a subset p of the training plurality of n primary face vertices, where p≤n; (Tena, col. 12, lines 37-39, “Segmentation of a face into multiple sub-models by system 100 also allows user interaction to modify the model at a local level ...” and col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.”)

training the secondary facial component model using the facial animation training data, wherein training the secondary facial component model using the facial animation training data comprises: (Tena, col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.” and col. 15, lines 5-8, “Returning to FIG. 4, in step 440, a linear model is generated for each identified region based on the training data. Each linear model may represent behavior of the region as learned from the training data.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

performing a matrix decomposition (e.g. principal component analysis (PCA), independent component analysis (ICA), non-negative matrix factorization (NMF), any other suitable matrix decomposition or dimensionality reduction technique and/or the like) of a combined training matrix which includes: (Tena, Abstract, “In various embodiments, a modeling system generates a spatially local PCA model where the parts are connected with soft constraints in the boundaries.” and col. 12, lines 34-37, “In one example, a PCA region-based face model generated by system 100 may use dense facial motion capture data and be flexible enough to generalize to multiple people.”)

a plurality of f frames, each of the plurality of f frames comprising p primary face training vertex locations corresponding to the subset p of the plurality of n primary face vertices; and m secondary facial component training vertex locations; (Tena, col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.” and col. 14, lines 56-59, “To create a region based face model in one embodiment, different sections of v can be independently modeled. Each section may contain a subset of the vertices that compose the full face.” and col. 15, lines 5-8, “Returning to FIG. 4, in step 440, a linear model is generated for each identified region based on the training data. Each linear model may represent behavior of the region as learned from the training data.” and col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face).” and col. 12, lines 37-39, “Segmentation of a face into multiple sub-models by system 100 also allows user interaction to modify the model at a local level ...” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

to yield a combined matrix decomposition; generating the secondary facial component model based on the combined matrix decomposition. (Tena, col. 20, line 46-col. 21, line 2, “The mathematical formulation presented herein allows the user to constrain one, multiple, all, or none of the vertices of the model's face mesh. The model's equation finds the best solution, in a soft least mean squares sense, that satisfies the user-provided constraints (e.g., an input data set) and the continuity constraints associated with regions of the model (e.g., boundary constraints and parameter space constraints being the last two terms in equation (10)). The user can produce different model behaviors by adjusting its intrinsic parameters β (boundary strength) and γ (rigidity). In one aspect, a high value of β combined with low γ produces holistic behavior by enforcing boundary consistency and freeing changes in the local parameter space. Relaxing the boundary strength while increasing rigidity allows to mold the face model without the use of constraints because the local sub-models compromise their boundaries in the interest of maintaining their current configuration. Intermediate values of β and γ allow the user to pin the face to a particular configuration by explicitly constraining its vertices. FIG. 12 is a graph depicting how reconstruction error and error at inter-region boundaries change as β varies in one embodiment. FIGS. 13A, 13B, and 13C show the results of face posing experiments using various combinations of β and γ.” and Abstract, “In various embodiments, a modeling system generates a spatially local PCA model where the parts are connected with soft constraints in the boundaries.” and col. 12, lines 34-37, “In one example, a PCA region-based face model generated by system 100 may use dense facial motion capture data and be flexible enough to generalize to multiple people.”)
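For concreteness, here is a minimal numpy sketch of the "combined matrix decomposition" recited in claim 22 (illustrative only; the variable names, toy shapes, and the PCA-via-SVD route are the editor's assumptions, not the applicant's disclosed implementation or Tena's code):

```python
import numpy as np

# Toy shapes (assumptions): f frames, p subset primary vertices, m secondary vertices.
f, p, m = 100, 40, 12
rng = np.random.default_rng(0)
primary_subset = rng.normal(size=(f, p, 3))   # per-frame (x, y, z) subset vertex locations
secondary = rng.normal(size=(f, m, 3))        # per-frame secondary component vertex locations

# Combined training matrix: one row per frame, 3p + 3m columns.
X = np.hstack([primary_subset.reshape(f, -1), secondary.reshape(f, -1)])

# PCA via SVD of the mean-centered matrix.
mean = X.mean(axis=0)                         # combined mean vector, length 3(m+p)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
q = 10                                        # number of retained components ("blendshapes")
B = Vt[:q]                                    # combined basis matrix, shape [q, 3(m+p)]
```

Per the claim's own parenthetical, ICA, NMF, or another dimensionality-reduction technique could stand in for the SVD step.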
8. As per claim 23, Tena discloses:

The method of claim 22 wherein obtaining the plurality of frames of facial animation training data comprises receiving the training representation of the training primary face geometry and the training representation of the training second facial component geometry from a computer-implemented animation rig. (Tena, col. 9, lines 39-42, “In various embodiments, the one or more animation systems 150 may be configured to enable users to manipulate controls or animation variables or utilized character rigging to specify one or more key frames of animation sequence.” and col. 11, lines 45-50, “In the production of computerized facial animation, a common practice is to use blendshape animation models (or rigs). These models aim to represent a given facial configuration as a linear combination of a predetermined subset of facial poses that define the valid space of facial expressions” and col. 14, lines 20-26, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like.”)

9. As per claim 24, Tena discloses:

The method of claim 22 wherein obtaining the plurality of frames of facial animation training data comprises receiving the training representation of the training primary face geometry and the training representation of the training second facial component geometry at least in part from user input from an artist. (Tena, col. 1, lines 31-38, “In one instance, a user (e.g., a skilled computer graphics artist) may specify the mathematical description of various objects, such as the geometry and/or topology of characters, props, backgrounds, scenes, or the like. In another instance, a user (e.g., an articulator or rigger) may specify a number of model components or animation control variables (avars) that may be used to position all or part of a model or otherwise manipulate aspects of the model.” and col. 14, lines 20-26, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like.”)

10. As per claim 25, Tena discloses:

The method of claim 22 wherein the subset p of the training plurality of n primary face vertices are selected by a user as being relevant to the secondary facial component geometry. (Tena, col. 22, lines 3-7, “The system allows the user to specify constraints by clicking on a face vertex and then dragging it to the desired location. Once the vertex is released, consecutive constraints may be added in the same manner to sculpt the desired pose.” and col. 5, lines 4-8, “In another aspect, determining the plurality of regions for the surface of the computer-generated face based on training data may include receiving information from a user guiding identification of at least one region in the plurality of regions.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)

11. As per claim 26, Tena discloses:

The method of claim 22 comprising selecting the subset p from among the training plurality of n primary face vertices based on proximity (e.g. within a proximity threshold or selected as the most proximate p primary face vertices) to the secondary facial component geometry. (Tena, col. 5, lines 1-4, “In further embodiments, determining a plurality of regions for the surface of the computer-generated face may include determining the plurality of regions using spectral clustering on a set of affinity matrices.” and col. 4, lines 43-47, “In other embodiment, using spatially local models of a surface of a computer-generated face, a second data set may be received. The second data set may define one or more manipulations to be performed to the surface of the computer-generated face.”)
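Claim 26's proximity-based selection admits a direct reading: pick the p primary face vertices nearest the secondary component, or all vertices within a threshold. A hedged sketch of that reading, with all names and data invented for illustration:

```python
import numpy as np

def select_subset_by_proximity(primary_rest, secondary_rest, p):
    """Return indices of the p primary vertices nearest the secondary component."""
    # Pairwise distances between [n, 3] primary and [m, 3] secondary rest positions.
    d = np.linalg.norm(primary_rest[:, None, :] - secondary_rest[None, :, :], axis=-1)
    nearest = d.min(axis=1)          # each primary vertex's distance to the component
    return np.argsort(nearest)[:p]   # the p most proximate; a threshold test also works

rng = np.random.default_rng(1)
subset_index = select_subset_by_proximity(
    rng.normal(size=(500, 3)),       # toy primary face vertices (n = 500)
    rng.normal(size=(30, 3)),        # toy secondary component vertices (m = 30)
    p=40)
```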
12. Claim 27, which is similar in scope to claim 25, is thus rejected under the same rationale as described above.

13. As per claim 28, Tena discloses:

The method of claim 25 wherein obtaining the plurality of frames of facial animation training data comprises at least one of obtaining or converting the training representation of the training primary face geometry to, for each of the one or more frames of facial animation training data, locations of the training plurality of n primary face training vertices, each primary face training vertex location comprising 3 coordinates. (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.”)

14. As per claim 29, Tena discloses:

The method of claim 25 wherein obtaining the plurality of frames of facial animation training data comprises at least one of obtaining or converting the training representation of the training primary face geometry to, for each of the one or more frames of facial animation training data, locations for each of the subset of p primary face vertices, each subset vertex location comprising 3 coordinates. (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.”)

15. As per claim 30, Tena discloses:

The method of claim 28 wherein obtaining the plurality of frames of facial animation training data comprises at least one of obtaining or converting the training representation of the training secondary facial component geometry to, for each of the one or more frames of facial animation training data, locations of the plurality of m secondary facial component training vertices, each secondary facial component training vertex location comprising 3 coordinates. (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.” and col. 14, lines 20-31, “In step 420, training data associated with a surface of a computer-generated face is received. The training data may include time-based position information for all or part of the surface of the computer-generated face. The training data may be obtained from a variety of sources, such as manually generated, procedurally generated, from motion capture, or the like. In some embodiments, the training data may include one or more training poses for the surface of the computer-generate face. Each training pose may provide one or more examples of a behavior expected of the surface of the computer-generate face or of a portion of the surface of the computer-generated face.”)

16. As per claim 31, Tena discloses:

The method of claim 22 wherein the combined matrix decomposition comprises: a combined basis matrix having dimensionality [q, 3(m+p)] where q is a number of blendshapes for the combined matrix decomposition; (Tena, col. 14, lines 32-41, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face.”)

a combined mean vector having dimensionality 3(m+p). (Tena, col. 18, lines 51-57, “To obtain a measure of how each vertex correlates with each other rather than the correlation between the x-y-z coordinates, three N×N sub-matrices containing only the correlation of the x, y, and z coordinates respectively are created. The mean of the three submatrices, C, is computed to obtain a metric that measures the degree at which vertices move in the same direction.”)
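The dimensionalities recited in claims 28-31 follow from flattening each vertex's 3 coordinates. A small sketch of the shape arithmetic (toy values and illustrative names, not from the application):

```python
import numpy as np

p, m, q = 40, 12, 10                    # toy counts
frame_primary = np.zeros((p, 3))        # claim 29: p subset vertices, 3 coordinates each
frame_secondary = np.zeros((m, 3))      # claim 30: m secondary vertices, 3 coordinates each

row = np.concatenate([frame_primary.ravel(), frame_secondary.ravel()])
assert row.size == 3 * (m + p)          # one training-matrix row holds 3(m+p) values

B = np.zeros((q, 3 * (m + p)))          # claim 31: combined basis matrix [q, 3(m+p)]
mean = np.zeros(3 * (m + p))            # claim 31: combined mean vector of length 3(m+p)
```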
17. As per claim 43, Tena discloses:

A system comprising a processor configured, by suitable programming, to perform the method of claim 22. (Tena, col. 25, lines 46-53, “The logic may be stored in or on a machine-accessible memory, a machine-readable article, a tangible computer-readable medium, a computer-readable storage medium, or other computer/machine-readable media as a set of instructions adapted to direct a central processing unit (CPU or processor) of a logic machine to perform a set of steps that may be disclosed in various embodiments of an invention presented within this disclosure.”)

18. As per claim 44, Tena discloses:

A computer program product comprising a non-transitory medium which carries a set of computer-readable instructions, which, when executed by a data processor, cause the data processor to execute the method of claim 22. (Tena, col. 25, lines 46-53, “The logic may be stored in or on a machine-accessible memory, a machine-readable article, a tangible computer-readable medium, a computer-readable storage medium, or other computer/machine-readable media as a set of instructions adapted to direct a central processing unit (CPU or processor) of a logic machine to perform a set of steps that may be disclosed in various embodiments of an invention presented within this disclosure.”)

Claim Rejections - 35 USC § 103

19. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

20. Claims 32-39 are rejected under 35 U.S.C. 103 as being unpatentable over Tena et al. (US-8922553-B1, hereinafter "Tena") in view of Li et al. (US-2015/0084950-A1, hereinafter "Li").

21. As per claim 32, Tena discloses:

The method of claim 31 wherein generating the secondary facial component model based on the combined matrix decomposition comprises generating, from the combined matrix decomposition: (See rejection for claim 22.)

[[a combined primary subset basis matrix having dimensionality [q, 3p] by extracting 3p vectors of length q (e.g. 3p columns) from the combined basis matrix which correspond to the subset p of primary face vertices; and]]

a combined primary subset mean vector having dimensionality 3p by extracting 3p elements from the combined mean vector which correspond to the subset p of primary face vertices. (Tena, col. 18, lines 48-60, “The data's normalized correlation matrix is computed. The correlation matrix is of dimensionality 3N×3N, N being the number of vertices, and expresses the correlations between each coordinate of each vertex. To obtain a measure of how each vertex correlates with each other rather than the correlation between the x-y-z coordinates, three N×N sub-matrices containing only the correlation of the x, y, and z coordinates respectively are created. The mean of the three submatrices, C, is computed to obtain a metric that measures the degree at which vertices move in the same direction. Vertices in the same region should not only be correlated, but also close to each other on the face surface.”)

22. Tena does not explicitly disclose, but Li discloses:

a combined primary subset basis matrix having dimensionality [q, 3p] by extracting 3p vectors of length q (e.g. 3p columns) from the combined basis matrix which correspond to the subset p of primary face vertices; and (Li, [0068], “In order to solve for blendshape coefficients using v₄, mapping back to blendshape space is needed since the adaptive tracking model lies in the PCA space. In some embodiments, example-based facial rigging may be used. As a result, the mesh v₄ may be mapped back to the vector v₁, but with updated blendshape coefficients x. The extracted blendshape coefficients x may then be transferred to a compatible blendshape model of a target character (e.g., character 328) for retargeting. FIG. 9 shows the results of transferring blendshape coefficients for retargeting.” and [0086], “At 512, the process 500 includes generating an animation of the subject using the refined mesh. In some embodiments, the process 500 further includes applying the refined mesh to an animation of a target character. For example, as described above with respect to FIG. 3, output animation may include using blendshape coefficients for expression retargeting to a character. More expressive retargeting may be obtained by re-solving for blendshape coefficients using resulting final output mesh vertices, as compared to using only initial blendshape coefficients. A mapping back to the blendshape space may be performed since the adaptive tracking model lies in the PCA space. In some embodiments, an example-based facial rigging may be used to map to the blendshape space. The extracted blendshape coefficients from the vertices of the adaptive model may then be transferred to a compatible blendshape model of a target character for retargeting.”)

23. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 31 of Tena to include the disclosure of a combined primary subset basis matrix having dimensionality [q, 3p] by extracting 3p vectors of length q (e.g. 3p columns) from the combined basis matrix which correspond to the subset p of primary face vertices of Li. The motivation for this modification could have been to extract details about the blendshape so that it can be generalized and utilized on other face head models.

24. As per claim 33, Tena in view of Li discloses:

The method of claim 32 wherein generating the secondary facial component model based on the combined matrix decomposition comprises generating, from the combined matrix decomposition: (See rejection for claim 22.)

a secondary facial component basis matrix having dimensionality [q, 3m] by extracting 3m vectors of length q (e.g. 3m columns) from the combined basis matrix which correspond to the m secondary facial component vertices; and (Li, [0068], “In order to solve for blendshape coefficients using v₄, mapping back to blendshape space is needed since the adaptive tracking model lies in the PCA space. In some embodiments, example-based facial rigging may be used. As a result, the mesh v₄ may be mapped back to the vector v₁, but with updated blendshape coefficients x. The extracted blendshape coefficients x may then be transferred to a compatible blendshape model of a target character (e.g., character 328) for retargeting. FIG. 9 shows the results of transferring blendshape coefficients for retargeting.” and [0086], “At 512, the process 500 includes generating an animation of the subject using the refined mesh. In some embodiments, the process 500 further includes applying the refined mesh to an animation of a target character. For example, as described above with respect to FIG. 3, output animation may include using blendshape coefficients for expression retargeting to a character. More expressive retargeting may be obtained by re-solving for blendshape coefficients using resulting final output mesh vertices, as compared to using only initial blendshape coefficients. A mapping back to the blendshape space may be performed since the adaptive tracking model lies in the PCA space. In some embodiments, an example-based facial rigging may be used to map to the blendshape space. The extracted blendshape coefficients from the vertices of the adaptive model may then be transferred to a compatible blendshape model of a target character for retargeting.”)

a secondary facial component mean vector having dimensionality 3m by extracting 3m elements from the combined mean vector which correspond to the m secondary facial component vertices. (Tena, col. 18, lines 48-60, “The data's normalized correlation matrix is computed. The correlation matrix is of dimensionality 3N×3N, N being the number of vertices, and expresses the correlations between each coordinate of each vertex. To obtain a measure of how each vertex correlates with each other rather than the correlation between the x-y-z coordinates, three N×N sub-matrices containing only the correlation of the x, y, and z coordinates respectively are created. The mean of the three submatrices, C, is computed to obtain a metric that measures the degree at which vertices move in the same direction. Vertices in the same region should not only be correlated, but also close to each other on the face surface.”)

25. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 32 of Tena to include the disclosure of a secondary facial component basis matrix having dimensionality [q, 3m] by extracting 3m vectors of length q (e.g. 3m columns) from the combined basis matrix which correspond to the m secondary facial component vertices of Li. The motivation for this modification could have been to extract details about the blendshape so that it can be generalized and utilized on other face head models.
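Claims 32 and 33 reduce to extracting columns and elements of the combined decomposition. A sketch of that extraction, assuming (purely for illustration) that the primary-subset coordinates occupy the first 3p columns of the combined basis; any fixed column ordering works so long as it matches the training matrix:

```python
import numpy as np

p, m, q = 40, 12, 10
B = np.zeros((q, 3 * (m + p)))       # combined basis matrix from the earlier sketch
mean = np.zeros(3 * (m + p))         # combined mean vector

# Column layout assumption: subset-primary coordinates first, then secondary.
A = B[:, :3 * p]                     # claim 32: combined primary subset basis, [q, 3p]
mean_primary = mean[:3 * p]          # claim 32: primary subset mean vector, length 3p
B_sec = B[:, 3 * p:]                 # claim 33: secondary component basis, [q, 3m]
mean_sec = mean[3 * p:]              # claim 33: secondary component mean, length 3m
```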
26. As per claim 34, Tena in view of Li discloses:

The method of claim 33 wherein generating the secondary facial component model based on the combined matrix decomposition comprises: (See rejection for claim 22.)

generating a projection matrix P having dimensionality [q, 3p] based on the combined primary subset basis matrix, wherein the projection matrix P will project a vector of 3p positions or offsets of the subset p of primary face vertices into a corresponding set of weights for the combined primary subset basis matrix; and (Li, [0009], “The method may further include refining the mesh by applying a deformation to the mesh and projecting the deformed mesh to a linear subspace, and generating an animation of the subject using the refined mesh.” and [0054], “The deformation stage of step 310 is followed by the subspace projection stage, in which the deformed mesh is projected onto the continuously improving linear adaptive PCA model 316 in a linear adaptive PCA subspace. The deformed mesh is projected onto a linear subspace so that the mesh is made linear, which allows it to be used as an input to continuously train the linear adaptive PCA tracking model 316.”)

generating a weight-conversion matrix C that forms part of the secondary facial component model based at least in part on the projection matrix P. (Li, [0051]-[0053], “Cotangent weights are defined with respect to the neutral mesh for the Laplacian smoothing terms: ... Equation (3), (5), and (6) can then be stacked into a single over constrained linear system: ... where Q is a 3F×3N matrix stacked from the projection matrix P from Equation (4), I denotes a 3N×3N identity matrix, w₁ is the weight for the point-to-point depth map constraints (e.g., w₁=0.1), w₂ is the weight for the Laplacian regularization constraint (e.g., w₂=100), and a contains all the constant terms from the constraints. The above system can be rewritten as GKΔv₁=a, where the least-square solution can be readily computed using a Moore-Penrose pseudoinverse ... As a result, using the above equations, the vertices from the mesh v₁ are displaced in order to create the mesh ...”)

27. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 33 of Tena to include the disclosure of generating a projection matrix P having dimensionality [q, 3p] based on the combined primary subset basis matrix, wherein the projection matrix P will project a vector of 3p positions or offsets of the subset p of primary face vertices into a corresponding set of weights for the combined primary subset basis matrix and generating a weight-conversion matrix C that forms part of the secondary facial component model based at least in part on the projection matrix P of Li. The motivation for this modification could have been to project the deformed mesh to a linear subspace so that it can be used to train a linear adaptive PCA tracking model. The weight-conversion matrix can be used as a method to influence facial components so they can become more or less pronounced, possibly creating new blendshapes.
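One plausible reading of claim 34's projection matrix P and weight-conversion matrix C, sketched in numpy. The names, toy data, and the composition C = B_secᵀP are the editor's assumptions for illustration, not language from the claims or the cited art:

```python
import numpy as np

q, p, m = 10, 40, 12
rng = np.random.default_rng(2)
M = rng.normal(size=(q, 3 * p))        # combined primary subset basis, [q, 3p]
B_sec = rng.normal(size=(q, 3 * m))    # secondary component basis, [q, 3m]
mean_primary = rng.normal(size=3 * p)
mean_sec = rng.normal(size=3 * m)

A = M.T                                # [3p, q]; columns are the q components
P = np.linalg.inv(A.T @ A) @ A.T       # projection matrix, [q, 3p]
C = B_sec.T @ P                        # one reading of the weight-conversion matrix, [3m, 3p]

# Runtime use: primary subset offsets in, secondary vertex positions out.
frame_primary = rng.normal(size=3 * p)  # an animated frame's subset vertex positions
offsets = frame_primary - mean_primary
weights = P @ offsets                   # weights for the q components
secondary = mean_sec + C @ offsets      # equals mean_sec + B_sec.T @ weights
```

Under this reading, C collapses projection and secondary reconstruction into a single linear map from primary offsets to secondary offsets, which is why it can "form part of the secondary facial component model."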
28. As per claim 35, Tena in view of Li discloses:

The method of claim 34 wherein generating the projection matrix P comprises selecting the projection matrix P that will minimize an error associated with converting the weights for the combined primary subset basis matrix back to the 3p positions or offsets of (e.g. reconstructing) the subset p of primary face vertices using the combined primary subset basis matrix. (Tena, col. 14, lines 32-45, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face. The model parameters, c, that best describe the input data v, in a least squares sense can be found by minimizing equation (2): E=∥v−Bc∥₂² (2)”)

29. As per claim 36, Tena in view of Li discloses:

The method of claim 35 wherein generating the projection matrix P comprises selecting the projection matrix P that will minimize a least squares error associated with converting the weights for the combined primary subset basis matrix back to the 3p positions or offsets of (e.g. recontrusting) the subset of p primary face vertices using the combined primary subset basis matrix. (Tena, col. 14, lines 32-45, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-generated face). In one specific case of 3D faces, for example, v can be a column vector containing the (x, y, z) spatial coordinates of each of the vertices in a mesh (e.g., mesh 200) that represents the computer-generated face. The model parameters, c, that best describe the input data v, in a least squares sense can be found by minimizing equation (2): E=∥v−Bc∥₂² (2)”)

30. As per claim 37, Tena in view of Li discloses:

The method of claim 35 wherein generating the projection matrix P comprises calculating the projection matrix P according to P=(AᵀA)⁻¹Aᵀ where Aᵀ is the combined primary subset basis matrix. (Tena, col. 14, lines 32-54, “Consider a linear model according to equation (1): v=Bc (1) where B are the model's linear basis, c are the model parameters and v is the data to be modeled (e.g., one or more manipulations of the surface of the computer-gen ...
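Claims 35-37 pin P down as the ordinary least-squares solution: the closed form P=(AᵀA)⁻¹Aᵀ minimizes the reconstruction error ∥v−Aw∥₂², mirroring Tena's equation (2). An illustrative check against numpy's lstsq (A here is random stand-in data, not anything from the record):

```python
import numpy as np

# Claims 35-37: P = (A^T A)^(-1) A^T is the least-squares minimizer of
# ||offsets - A w||_2^2, the same objective as Tena's E = ||v - Bc||_2^2 (eq. (2)).
rng = np.random.default_rng(3)
q, p = 10, 40
A = rng.normal(size=(3 * p, q))        # columns: the q components of the subset basis

P = np.linalg.inv(A.T @ A) @ A.T       # claim 37's closed form, shape [q, 3p]
offsets = rng.normal(size=3 * p)       # toy 3p positions/offsets of the subset vertices

w_closed = P @ offsets
w_lstsq, *_ = np.linalg.lstsq(A, offsets, rcond=None)
assert np.allclose(w_closed, w_lstsq)  # identical minimizers of the squared error

reconstruction = A @ w_closed          # claim 36: converting weights back to 3p offsets
```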

Prosecution Timeline

Jan 11, 2024: Application Filed
Dec 09, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530842: AIRBORNE LiDAR POINT CLOUD FILTERING METHOD DEVICE BASED ON SUPER-VOXEL GROUND SALIENCY
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12499800: IN-VEHICLE DISPLAY DEVICE
Granted Dec 16, 2025 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 1y 11m
PTA Risk: Low

Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
