Prosecution Insights
Last updated: April 19, 2026
Application No. 18/312,102

SYNTHETIC DATA GENERATION USING MORPHABLE MODELS WITH IDENTITY AND EXPRESSION EMBEDDINGS

Final Rejection (§103)

Filed: May 04, 2023
Examiner: STATZ, BENJAMIN TOM
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 2 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 33 currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 2 resolved cases.
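The comparative figures above reduce to simple arithmetic. The sketch below is a hypothetical reconstruction: the Tech Center average of 62.0% is inferred from the displayed delta, not an official USPTO figure.

```python
def pct_delta(examiner_rate: float, tc_avg: float) -> float:
    """Percentage-point gap between the examiner's rate and the TC average."""
    return round(examiner_rate - tc_avg, 1)

# Career allow rate: 0 granted out of 2 resolved cases -> 0%.
granted, resolved = 0, 2
allow_rate = 100.0 * granted / resolved

# The card reads "-62.0% vs TC avg", implying a TC average near 62.0%
# (an inferred value, used here only for illustration).
delta = pct_delta(allow_rate, 62.0)   # -62.0
```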

Office Action

§103

DETAILED ACTION

This office action is responsive to the amendment/response filed on 11/03/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pgs. 8-9, filed 11/03/2025, with respect to the rejection of claims 1, 5, 8, and 19 under 35 U.S.C. 112 have been fully considered and are persuasive. The rejection of claims 1, 5, 8, and 19 under 35 U.S.C. 112 has been withdrawn.

Applicant's arguments, filed 11/03/2025, with respect to the rejection(s) of independent claims 1, 8, and 15 under 35 U.S.C. 103 have been fully considered and are persuasive. The provided prior art references do not teach or suggest the amended limitation in claim 1 involving "a randomly initialized, unlearned shape embedding corresponding to a three-dimensional (3D) coordinate of the geometry embedding" or the similar limitations in claims 8 and 15. Yenamandra teaches a single, fixed initialized shape for the Reference Shape Network, but it is neither "randomly initialized" nor "unlearned". Therefore, the rejection has been withdrawn.

However, upon further consideration, a new ground(s) of rejection is made in view of Tilke et al. (US 20240331363 A1). Tilke et al. teaches the random initialization of neural network inputs; it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the teachings of Tilke et al. to the inventions of the provided prior art in order to teach the claimed limitations. The new ground of rejection in view of Tilke et al. also applies to the dependent claims as discussed in applicant's arguments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Yenamandra et al. (i3DMM: Deep Implicit 3D Morphable Model of Human Heads, 2021, hereinafter "Yenamandra") in view of Xiang et al. (NeuTex: Neural Texture Mapping for Volumetric Neural Rendering, 2021, hereinafter "Xiang") and Tilke et al. (US 20240331363 A1, hereinafter "Tilke").

Regarding claim 1, Yenamandra teaches: A computer-implemented method, comprising: determining, using a first multilayer perceptron (MLP) (fig. 5, Reference Shape Network) and based on a first input set comprising a geometry embedding (initial input to Shape Deformation Network includes shape latent code; output of Shape Deformation Network is used as input to Reference Shape Network) and an initialized shape embedding corresponding to a three-dimensional (3D) coordinate of the geometry embedding (pg. 4 "Our Reference Shape Network encodes a single reference shape, such that all individual head shapes can be obtained by deforming this shape."; pg. 6 "We initialize RefNet by pretraining it using only one mouth-open (top row, third from left in Fig. 3) training scan."), a signed distance field (SDF) corresponding to a position (fig. 5, output of Reference Shape Network is an SDF value; caption "The input of the network is a 3D query point, and the output is a signed distance value along with the corresponding color"), and a first set of parameters (fig. 5 caption "We learn weights of three network components…"); determining, using a second MLP (fig. 5, Shape Deformation Network) and based on the first input set (Shape Deformation Network and Reference Shape Network share the same initial inputs), a second set of parameters (fig. 5 caption "We learn weights of three network components…"); generating, using a third MLP (fig. 5, Color Network) and based on a second input set (input to Color Network includes color latent code as well as the output of Shape Deformation Network, which includes shape latent code and 3D position), a color mapping for a position corresponding to an object (fig. 5, output of Color Network is a color; caption "The input of the network is a 3D query point, and the output is a signed distance value along with the corresponding color") and a third set of parameters (fig. 5 caption "We learn weights of three network components…"); and rendering, based at least in part, on the first set of parameters, the second set of parameters, and the third set of parameters (fig. 5, as previously cited), a 3D representation of the object (fig. 1).

Yenamandra does not explicitly teach the use of a UV mapping and a UVW mapping as additional inputs to the second and third MLP, respectively. Xiang teaches a similar structure as shown in fig. 2, with separate MLP branches for shape and texture. MLP (4) Fσ corresponds to the claimed first MLP, MLP (2) Fuv-1 corresponds to the claimed second MLP, and MLP (3) Ftex corresponds to the claimed third MLP. Xiang teaches a UV mapping as input to the second MLP (fig. 2 shows (u, v) input to MLP (2) Fuv-1; caption "We also train an inverse mapping MLP (2) Fuv-1 that maps UVs back to 3D points"). Xiang teaches a UVW mapping as input to the third MLP (fig. 2 shows (u, v) input to MLP (3) Ftex; pg. 4 col. 2 section 3.3 "Texture space and inverse texture mapping": "As described in Eqn. 5, our texture space is parameterized by a 2D UV coordinate u = (u, v). We use a 2D unit sphere for our results, where u is interpreted as a point on the unit sphere.", where a UV mapping on a unit sphere is equivalent to a UVW mapping with the W coordinate fixed at a value of 1).

Yenamandra and Xiang are analogous to the claimed invention because they are in the same field of neural rendering. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra with the teachings of Xiang to represent the input texture/color data as a UV/UVW mapping. The motivation would have been to make editing input textures easier for a user (Xiang pg. 1 col. 1, Abstract).

The combination of Yenamandra in view of Xiang does not explicitly teach that the initialized shape embedding is unlearned and randomly initialized. Tilke teaches the random initialization of neural network inputs ([0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."; by definition this does not require learning). Tilke and the combination of Yenamandra in view of Xiang are analogous to the claimed invention because they pertain to the same issue of training a neural network. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang with the teachings of Tilke to randomly initialize the shape embedding. The motivation would have been to avoid bias in the output that may result from a single, fixed initialization.
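For orientation, the architecture the rejection maps claim 1 onto can be sketched in code. This is a minimal illustrative reconstruction, not Yenamandra's actual i3DMM implementation: the two-layer mlp helper, all dimensions, and the toy query point are assumptions chosen only to show the data flow of the three MLPs and the (disputed) randomly initialized, unlearned latent embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    # Minimal 2-layer perceptron: linear -> ReLU -> linear.
    (W1, b1), (W2, b2) = params
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def init_mlp(d_in, d_hidden, d_out):
    return [(rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden)),
            (rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))]

# Hypothetical dimensions; the papers' actual sizes differ.
D_GEO, D_COL, D_SHAPE = 8, 8, 4

# Randomly initialized, unlearned embeddings (the disputed limitation):
# latent codes start as random vectors rather than pretrained values.
z_geo = rng.normal(size=D_GEO)        # geometry/expression latent code
z_col = rng.normal(size=D_COL)        # color latent code
z_shape = rng.normal(size=D_SHAPE)    # shape embedding for a 3D coordinate

x = np.array([0.1, -0.2, 0.3])        # 3D query point

deform_net = init_mlp(3 + D_GEO, 16, 3)       # "second MLP" (deformation)
ref_shape_net = init_mlp(3 + D_SHAPE, 16, 1)  # "first MLP" -> SDF value
color_net = init_mlp(3 + D_COL, 16, 3)        # "third MLP" -> RGB

# Deform the query point toward the reference frame, then query SDF and color.
x_ref = x + mlp(deform_net, np.concatenate([x, z_geo]))
sdf = mlp(ref_shape_net, np.concatenate([x_ref, z_shape]))[0]
rgb = mlp(color_net, np.concatenate([x_ref, z_col]))
```

In i3DMM itself the three networks are trained jointly with the latent codes in an auto-decoder fashion; this sketch only fixes the wiring the Office Action describes.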
Regarding claim 2, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 1, wherein the first input set includes a plurality of geometry embeddings, a plurality of expression embeddings, and the position (Yenamandra fig. 5, input to Shape Deformation Network includes shape latent code and (x, y, z) position; pg. 4 col. 1 explains that the shape latent code includes geometry and expression: "Existing approaches use a single object latent code to describe the shape. In contrast, we design several separate latent spaces for our objects in order to learn a semantically disentangled model. We use two separate latent vector spaces for geometry and color, zgeo and zcol, respectively. The geometry space includes three code vectors for identity, expression, and hairstyle…").

Regarding claim 3, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 2, wherein the position is a 3D position (Yenamandra fig. 5 shows (x, y, z) input to Shape Deformation Network; caption "The input of the network is a 3D query point").

Regarding claim 4, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 2, wherein the plurality of geometry embeddings correspond to a 3D mesh (Yenamandra pg. 3 "Data Acquisition" describes the creation of 3D meshes for training data; pg. 4 "Latent Codes and Disentanglement" describes the generation of latent codes based on the training data).

Regarding claim 5, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 1, wherein the second input set includes a plurality of expression embeddings, the position, and a plurality of color embeddings (Yenamandra fig. 5, input to Color Network includes color latent code as well as the results of the input to Shape Deformation Network, which includes expression latent code and (x, y, z) position).

Regarding claim 6, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 5, wherein the UV mapping maps the position to a sphere and the sphere to a two-dimensional (2D) space (Xiang pg. 4 col. 2 "While any continuous 2D topology can be used for the UV space in our network, we use a 2D unit sphere for most results, where u is interpreted as a point on the unit sphere" – the 3D sphere is reduced to two dimensions due to the third (radial) coordinate being fixed to a value of 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang and Tilke with the additional teachings of Xiang to use a spherical UV mapping. The motivation would have been that "It makes our method work best for objects with genus 0" (Xiang pg. 4 col. 2) – in simpler terms, objects with no holes, such as human heads.

Regarding claim 7, the combination of Yenamandra in view of Xiang and Tilke teaches: The computer-implemented method of claim 1, further comprising: updating the first input set using gradient backpropagation; and updating the second input set using gradient backpropagation (Tilke [0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."). Tilke and the combination of Yenamandra in view of Xiang are analogous to the claimed invention because they pertain to the same issue of training a neural network. Additionally, gradient backpropagation is a well-known technique in the field and a standard method for training neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang with the teachings of Tilke to optimize parameters using gradient backpropagation.
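The latent-vector optimization that Tilke's paragraph [0028] is cited for can be illustrated with a toy auto-decoder step. This is a hedged sketch under assumptions (a fixed random linear "decoder" and a made-up target are stand-ins, not anything from the references); it only shows a randomly initialized latent vector being refined by gradient descent on a reconstruction loss.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(4, 3))          # toy fixed "decoder" weights (assumed)
target = np.array([0.5, -0.3, 0.2])  # observed sample to reconstruct (assumed)

z = rng.normal(size=4)               # randomly initialized, unlearned latent
initial_loss = float(np.sum((z @ W - target) ** 2))

lr = 0.01
for _ in range(1000):
    pred = z @ W
    grad_z = 2.0 * W @ (pred - target)   # d/dz of ||zW - target||^2
    z -= lr * grad_z                     # gradient-descent update of the latent

final_loss = float(np.sum((z @ W - target) ** 2))
```

In the actual systems the decoder is an MLP and backpropagation computes the gradient automatically; the point here is only that the latent input itself, not just the weights, is what gets optimized from a random start.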
The motivation would have been to perform the typical functionality of a neural network.

Regarding claim 8, Yenamandra teaches: A processor, comprising: one or more circuits to: receive a first input set, the first input set including a geometry embedding, an expression embedding (fig. 5 shows the first set of latent codes (geometry, which includes expression) being received by the Shape Deformation Network; pg. 4 col. 1 "We use two separate latent vector spaces for geometry and color, zgeo and zcol, respectively. The geometry space includes three code vectors for identity, expression, and hairstyle, and the color space includes two code vectors for identity and hairstyle… During training, the number of different identity code vectors is equal to the number of training identities, 58. The number of different expression vectors is fixed to 10…"), and a position (fig. 5 shows (x, y, z) input to Shape Deformation Network; caption "The input of the network is a 3D query point"); determine, based on the first input set, a signed distance field (SDF) corresponding to the position (fig. 5, output from Reference Shape Network; caption "The input of the network is a 3D query point, and the output is a signed distance value along with the corresponding color", SDF value is generated based on shape latent codes which were initially input to Shape Deformation Network), the SDF determined using a shape embedding corresponding to a three-dimensional (3D) coordinate of the geometry embedding (pg. 4 "Our Reference Shape Network encodes a single reference shape, such that all individual head shapes can be obtained by deforming this shape."; pg. 6 "We initialize RefNet by pretraining it using only one mouth-open (top row, third from left in Fig. 3) training scan."); receive a second input set, including a color embedding, the expression embedding (fig. 5 shows the second set of latent codes (color) being received by the Color Network along with the output from the Shape Deformation Network derived from the first set of latent codes, which included the expression code; pg. 4 col. 1 "We use two separate latent vector spaces for geometry and color, zgeo and zcol, respectively. The geometry space includes three code vectors for identity, expression, and hairstyle, and the color space includes two code vectors for identity and hairstyle… During training, the number of different identity code vectors is equal to the number of training identities, 58. The number of different expression vectors is fixed to 10…"); determine, based on the second input set, a color mapping corresponding to the position (fig. 5, output from Color Network; caption "The input of the network is a 3D query point, and the output is a signed distance value along with the corresponding color", color value is generated based on color latent codes); generate a first set of parameters for a plurality of neural networks and a second set of parameters for the geometry embedding, the expression embedding, and the color embedding (fig. 5 "We learn weights of three network components, a Shape Deformation component, a Reference Shape component, and a Color component. Moreover, the latent codes for each object are also optimized for.", pg. 4 col. 1 "We use two separate latent vector spaces for geometry and color, zgeo and zcol, respectively. The geometry space includes three code vectors for identity, expression, and hairstyle, and the color space includes two code vectors for identity and hairstyle."); and render, using a neural network incorporating the first set of parameters and the second set of parameters (fig. 5, as previously cited), a 3D facial representation (fig. 1).

Yenamandra does not explicitly teach: to determine, based on the first input set, a UV mapping for the position; to receive a second input set including the UV mapping; or to render a 3D facial representation based on an input image. Xiang teaches the concept of UV mapping, specifically to determine, based on the first input set, a UV mapping for the position (fig. 2 "we use a texture mapping MLP (1) Fσ to map 3D points to 2D texture UVs"), and to receive a second input set including the UV mapping (fig. 5, 2D texture map (u, v) is received as input by texture network Ftex); as well as to render a 3D facial representation based on an input image (Xiang pg. 6 col. 1 "We demonstrate our method on real scenes from different sources, including five scenes from the DTU dataset [1] (Fig. 1, 4, 5), two scenes from Neural Reflectance Fields [4] obtained from the authors (Fig. 6), and three scenes captured by ourselves (Fig. 5). Each DTU scene contains either 49 or 64 input images from multiple viewpoints. Each scene from [4] contains about 300 images. Our own scenes each contain about 100 images."). Yenamandra and Xiang are both analogous to the claimed invention because they are in the same field of 3D neural rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra to incorporate the teachings of Xiang to represent the texture/color aspect as a UV mapping and to use images as input. The motivation would have been to make editing textures easier for a user (Xiang pg. 1 col. 1, Abstract), and to simplify the data acquisition process, respectively. The combination of Yenamandra in view of Xiang does not explicitly teach that the shape embedding is randomly initialized and unlearned.
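The spherical UV parameterization the rejection leans on (a UV point on a unit sphere as a UVW point with the radial "W" fixed at 1) can be made concrete. In NeuTex the forward and inverse texture mappings are learned MLPs; the sketch below instead writes them analytically, purely to illustrate the geometry, so the closed-form mappings are an assumption of this sketch rather than the papers' method.

```python
import numpy as np

def to_uv(p):
    """Map a 3D point to (u, v) = (azimuth, polar angle) of its direction."""
    d = p / np.linalg.norm(p)
    u = np.arctan2(d[1], d[0])      # azimuth in [-pi, pi]
    v = np.arccos(d[2])             # polar angle in [0, pi]
    return u, v

def from_uv(u, v, w=1.0):
    """Inverse map: (u, v) back to a 3D point on a sphere of radius w.

    With w fixed at 1.0 this is exactly the "UVW mapping with the W
    coordinate fixed at a value of 1" equivalence argued in the rejection.
    """
    return w * np.array([np.sin(v) * np.cos(u),
                         np.sin(v) * np.sin(u),
                         np.cos(v)])

p = np.array([0.3, -0.4, 0.5])      # arbitrary 3D query point
u, v = to_uv(p)
q = from_uv(u, v)                   # unit-sphere point along p's direction
```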
Tilke teaches the random initialization of neural network inputs ([0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."; by definition this does not require learning). Tilke and the combination of Yenamandra in view of Xiang are analogous to the claimed invention because they pertain to the same issue of training a neural network. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang with the teachings of Tilke to randomly initialize the shape embedding. The motivation would have been to avoid bias in the output that may result from a single, fixed initialization.

Regarding claim 10, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the position is a 3D position (Yenamandra fig. 5; pg. 3 section 3.3 "Training": "…x ∈ R3 is the query point…") based on a facial mesh (Yenamandra pg. 3 section 3.3 "Training": "We require (x, s, c)-triplets (query point, signed distance value, color) for training. We use a combination of two strategies for sampling these triplets. First, we sample points on the mesh surface…"; pg. 3 section 3.2 "Data Acquisition" describes the creation of 3D facial meshes for training data).

Regarding claim 11, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the first set of parameters and the second set of parameters are generated using gradient propagation (Tilke [0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."). Tilke and the combination of Yenamandra in view of Xiang are analogous to the claimed invention because they pertain to the same issue of training a neural network. Additionally, gradient backpropagation is a well-known technique in the field and a standard method for training neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang with the teachings of Tilke to optimize parameters using gradient backpropagation. The motivation would have been to perform the typical functionality of a neural network.

Regarding claim 12, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the one or more circuits are further to initialize initial values of the first input set to random numbers (Tilke [0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."). Tilke and the combination of Yenamandra in view of Xiang are analogous to the claimed invention because they pertain to the same issue of training a neural network. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang with the teachings of Tilke to randomly initialize network inputs. The motivation would have been to avoid bias in the output that may result from a single, fixed initialization.

Regarding claim 13, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the one or more circuits are further to: receive training data corresponding to image data of a face and a representative expression (Yenamandra pg. 3 col. 1 "Data Acquisition": "For training, we have scanned 64 subjects… For each subject we have recorded 10 facial expressions… including neutral expressions…"); convert the image data of the face into the geometry embedding (Yenamandra pg. 3 col. 2, "We use an autodecoder network architecture [36], where the weights of the network θ and the input latent codes z for all shapes are learned jointly"; "Mesh Sampling" section describes the collection of (x, s, c)-triplets (query point, signed distance value, color) for training); and convert the representative expression into the expression embedding (Yenamandra pg. 3 col. 2, "We use an autodecoder network architecture [36], where the weights of the network θ and the input latent codes z for all shapes are learned jointly"; pg. 4 "…for each expression and hairstyle, the same variables zgeoEx, zgeoH and zcolH are used across all identities. By doing so, we are able to learn disentangled latent variables without imposing any explicit constraints").

Regarding claim 14, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output (Yenamandra fig. 10); a system for performing deep learning operations (Xiang pg. 2 col. 2 "Recently, deep learning-based methods have proposed to ameliorate or completely bypass mesh reconstruction to achieve realistic neural renderings of real scenes…"); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content (Xiang pg. 1 col. 1 "One crucial goal of this task is to avoid the tedious manual 3D modeling process and directly provide a renderable and editable 3D model that can be used for realistic rendering in applications, like e-commerce, VR and AR"); a system for generating or presenting augmented reality (AR) content (Xiang pg. 1 col. 1, as previously cited); a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system for performing operations for a conversational AI application; a system for performing operations for a generative AI application; a system for performing operations using a language model; a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for synthetic data generation (Xiang pg. 2 col. 2 "our approach explicitly extracts surface appearance as view-independent textures, just like standard textures used with meshes, allowing for broad texture editing applications in 3D modeling and content generation"); a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang and Tilke to incorporate the additional teachings of Xiang to apply it toward any of the listed practical applications. The motivation would have been to push the invention beyond experimentation and closer to being a commercial product.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yenamandra (i3DMM: Deep Implicit 3D Morphable Model of Human Heads) in view of Xiang (NeuTex: Neural Texture Mapping for Volumetric Neural Rendering) and Tilke (US 20240331363 A1) as applied to claim 8 above, and further in view of Poursaeed et al. (Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling, 2020, hereinafter "Poursaeed").
Regarding claim 9, the combination of Yenamandra in view of Xiang and Tilke teaches: The processor of claim 8, wherein the one or more circuits are further to: generate a 3D vector based at least on the UV mapping (Xiang pg. 2 col. 1 "We train an additional inverse mapping MLP to map the 2D UV coordinates of these high-contribution points back to their 3D locations. Introducing this inverse-mapping network forces our model to learn a consistent mapping (similar to a one-to-one correspondence) between the 2D UV coordinates and the 3D points on the object surface"; also see fig. 2). The combination of Yenamandra in view of Xiang and Tilke does not explicitly teach: replace the UV mapping with the 3D vector in the second input set. Poursaeed teaches: generate a 3D vector based at least on the UV mapping, and replace the UV mapping with the 3D vector in the second input set (pg. 1 "The explicit surface representation defines the surface as an atlas – a collection of charts, which are maps from 2D to 3D, {fi : Ωi ⊂ R2 → R3}, with each chart mapping a 2D patch Ωi into a part of the 3D surface."; fig. 1 shows the model architecture in which the atlas output is used as input for another network). Poursaeed and the combination of Yenamandra in view of Xiang and Tilke are analogous to the claimed invention because they are in the same field of neural rendering. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Xiang and Tilke to incorporate the teachings of Poursaeed to represent the 2D texture map in 3D when using the MLPs to generate a 3D model. The motivation would have been to act as an alternate method of avoiding "limiting network's ability to focus its capacity on high-entropy regions" (Poursaeed pg. 1) while still allowing the 3D surface to be defined explicitly.

Claims 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yenamandra (i3DMM: Deep Implicit 3D Morphable Model of Human Heads) in view of Tilke (US 20240331363 A1).

Regarding claim 15, Yenamandra teaches: A system, comprising: one or more processing units to determine a first set of weights (pg. 3 col. 2 section 3.3 "Training") for a plurality of neural networks (fig. 5 shows the arrangement of 3 neural networks) and a second set of weights for a plurality of input embeddings (pg. 4 col. 1 "…latent codes for each object (head scan) are also learned during training") based, at least, on a set of labeled training data corresponding to a plurality of geometry embeddings with an associated plurality of expression embeddings (pg. 4 col. 1 "Existing approaches use a single object latent code to describe the shape. In contrast, we design several separate latent spaces for our objects in order to learn a semantically disentangled model. We use two separate latent vector spaces for geometry and color, zgeo and zcol, respectively. The geometry space includes three code vectors for identity, expression, and hairstyle…"), wherein both of the first set of weights and the second set of weights are optimized… (pg. 5 col. 2 "Given N batches with K objects per batch, we optimize for the network weights and the latent vectors by solving the optimization problem…") at a respective position and a signed distance field (SDF) for the respective position (pg. 5 col. 1 "Loss Functions" describes the function being optimized, which includes the query point x and the signed distance value at x, sgt(x)). Yenamandra does not explicitly teach that the weights are optimized through gradient propagation, or that the shape embedding is randomly initialized and unlearned.
Tilke teaches the optimization of a neural network through gradient propagation and a neural network input being randomly initialized ([0028] "These latent vectors can be optimized from random initializations via the gradient backpropagation method."; by definition this does not require learning). Tilke and Yenamandra are analogous to the claimed invention because they pertain to the same issue of training a neural network. Additionally, gradient backpropagation is a well-known technique in the field and a standard method for training neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra with the teachings of Tilke to optimize parameters using gradient backpropagation and to randomly initialize the shape embedding. The motivations would have been to perform the typical functionality of a neural network, and to avoid bias in the output that may result from a single, fixed initialization.

Regarding claim 16, the combination of Yenamandra in view of Tilke teaches: The system of claim 15, wherein the position is a 3D position (Yenamandra fig. 5 caption, "The input of the network is a 3D query point…").

Regarding claim 17, the combination of Yenamandra in view of Tilke teaches: The system of claim 15, wherein the geometry embeddings correspond to a 3D mesh (Yenamandra pg. 3 "Data Acquisition" describes the creation of 3D meshes for training data; pg. 4 "Latent Codes and Disentanglement" describes the generation of latent codes based on the training data).

Regarding claim 18, the combination of Yenamandra in view of Tilke teaches: The system of claim 15, wherein the associated plurality of expression embeddings correspond to one or more facial expressions (Yenamandra fig. 3 shows facial expressions used for training; pg. 4 col. 1 "The number of different expression vectors is fixed to 10 (cf. Fig. 3 for the training expressions)").

Regarding claim 19, the combination of Yenamandra in view of Tilke teaches: The system of claim 15, wherein the plurality of input embeddings further comprises a plurality of color embeddings (Yenamandra pg. 4 col. 1, "In contrast, we design several separate latent spaces for our objects in order to learn a semantically disentangled model. We use two separate latent vector spaces for geometry and color… the color space includes two code vectors for identity and hairstyle… During training, the number of different identity code vectors is equal to the number of training identities, 58").

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Yenamandra (i3DMM: Deep Implicit 3D Morphable Model of Human Heads) in view of Tilke (US 20240331363 A1) as applied to claim 15 above, and further in view of Xiang (NeuTex: Neural Texture Mapping for Volumetric Neural Rendering).

Regarding claim 20, the combination of Yenamandra in view of Tilke teaches: The system of claim 15, but does not explicitly teach: wherein the one or more processing units are further to determine a UV mapping for the plurality of input embeddings at the respective positions. Xiang teaches: wherein the one or more processing units are further to determine a UV mapping for the plurality of input embeddings at the respective positions (pg. 2 col. 1 "In particular, we train a texture mapping MLP to regress a 2D UV coordinate at every 3D point in the scene, and use another MLP to regress radiance in the 2D texture space for any UV location. Thus, given any 3D shading point in ray marching, our network can obtain its radiance by sampling the reconstructed neural texture at its mapped UV location."; also see fig. 2). Xiang and the combination of Yenamandra in view of Tilke are analogous to the claimed invention because they are in the same field of 3D neural rendering.
Due to its representation of a given scene as a radiance field, the invention of Xiang does not use the same style of input embeddings as Yenamandra and the claimed invention, but it disentangles geometry and texture in the same manner. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yenamandra in view of Tilke to incorporate the teachings of Xiang to represent the texture/color aspect as a UV mapping. The motivation would have been to make editing textures easier for a user (Xiang pg. 1 col. 1, Abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN TOM STATZ/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

May 04, 2023 - Application Filed
Apr 30, 2025 - Non-Final Rejection (§103)
Oct 29, 2025 - Applicant Interview (Telephonic)
Oct 29, 2025 - Examiner Interview Summary
Nov 03, 2025 - Response Filed
Jan 25, 2026 - Final Rejection (§103, current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
