Prosecution Insights
Last updated: April 19, 2026
Application No. 18/433,781

3D DIGITAL VIRTUAL CHARACTER GENERATION VIA A TWO-STAGE PROCESS

Final Rejection §103
Filed: Feb 06, 2024
Examiner: PROVIDENCE, VINCENT ALEXANDER
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Zoom Video Communications, Inc.
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 15 granted / 18 resolved; +21.3% vs TC avg)
Interview Lift: +25.0% (strong; resolved cases with vs. without interview)
Avg Prosecution: 2y 5m (typical timeline; 38 currently pending)
Total Applications: 56 (career history, across all art units)

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 82.4% (+42.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed December 11, 2025 has been entered. Claims 1-20 are pending in the application. Applicant's amendments to Claims 1, 4, 5, 8, 11, 12, 15, 18, and 19 have overcome the rejections previously set forth in the Non-Final Office Action mailed August 13, 2025. A second search has been performed to address the material amended in the aforementioned claims. Newly found references Wood (NPL: A 3D morphable eye region model for gaze estimation), Carr (US 20130076619 A1), and Saito (NPL: Smooth Contact-Aware Facial Blendshapes Transfer; from Applicant's IDS) were used for the newly amended claim limitations.

Response to Arguments

The Examiner appreciates Applicant's thorough review of the previous Non-Final Action. Applicant's arguments with respect to the application of Danieau to claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sumner et al. (NPL: Deformation Transfer for Triangle Meshes; from Applicant's IDS) in view of Wood (NPL: A 3D morphable eye region model for gaze estimation).

Regarding claim 1:

Sumner teaches: A computer implemented method (Sumner: Mesh deformation plays a central role in computer modeling and animation, Pg. 1, Section 1: Introduction, par. 1) for generating a virtual character, the method comprising: accessing a source human face model, a target human face model, and a source virtual character face model (Sumner: In Figure 7, facial expressions of a real person, acquired with a 3D scanning system, are transferred onto a digital character, Pg. 5, Section 6: Results, par. 4; see Note 1A); deforming the source virtual character face model based on the source human face model and the target human face model to generate a target virtual character face model (see Note 1A), wherein: the source virtual character face model comprises a face region, wherein the face region comprises multiple facial features (see Note 1B); and

Note 1A: Sumner teaches in Figure 7 (Pg. 7) that a human face model may be accessed and deformed. A reproduced figure below shows which faces are analogous to the source human face model, target human face model, source virtual character face model, and target virtual character face model. Note that the human face model is deformed with an expression, which is in turn applied to the virtual character face model via "deformation transfer", as described by Sumner on Pg. 2, Section 2: Background, par. 4: "The concept of deformation transfer can be posed as an analogy: given a pair of source meshes, S and S′, and a target mesh T, generate a new mesh T′ such that the relationship between T and T′ is analogous to the relationship between S and S′." That is, when S is the source human face model, S′ is the target human face model, T is the source virtual character face model, and T′ is the target virtual character face model, Sumner teaches deforming the source virtual character face model based on the source human face model and the target human face model to generate a target virtual character face model, which also necessarily requires accessing a source human face model, a target human face model, and a source virtual character face model.

[Image: Edited variant of Figure 7 of Sumner, highlighting which models correspond to the terms of claim 1 of the present application.]

Note 1B: Note that in the figure above, the source virtual character face model comprises multiple facial features, such as a nose, ears, and a mouth.

Sumner fails to explicitly teach: wherein: the source virtual character face model comprises a face region and a non-face region, wherein the non-face region comprises interior regions corresponding to the multiple facial features; and deforming the source virtual character face model comprises: deforming the face region without deforming the non-face region to obtain a deformed face region, and deforming the non-face region without deforming the deformed face region to obtain a deformed non-face region; and rendering the target virtual character face model.
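Editor's note: the deformation-transfer analogy discussed in Note 1A can be sketched as a toy Python example (illustrative only, not part of the Office Action record). For simplicity the sketch copies per-vertex displacements, which is closer to the expression-cloning idea that Sumner generalizes than to Sumner's actual per-triangle, gradient-space solve; the array shapes and values below are hypothetical.

```python
import numpy as np

def transfer_deformation(S, S_prime, T):
    """Toy vertex-displacement transfer: apply the S -> S' change to T.

    S, S_prime, T are (N, 3) arrays of corresponding vertex positions.
    Sumner's actual method solves for T' so that per-triangle deformation
    gradients match; this sketch just copies per-vertex displacements.
    """
    S, S_prime, T = (np.asarray(a, dtype=float) for a in (S, S_prime, T))
    return T + (S_prime - S)

# Hypothetical 3-vertex "source face" whose second vertex moves up 1 unit.
S = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
S_prime = S + np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
T = S * 2.0  # a differently proportioned target mesh
T_prime = transfer_deformation(S, S_prime, T)
# The target inherits the same displacement on its second vertex.
```

Sumner's actual technique instead solves a linear system so that the per-triangle deformation gradients of T to T′ match those of S to S′, which accommodates source and target meshes of differing structure via a triangle correspondence.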
Wood teaches: wherein: the source virtual character face model (Wood: a new multi-part model of the eye, Abstract) comprises a face region (Wood: morphable model of the facial eye region, Abstract) and a non-face region (Wood: as well as an anatomy-based eyeball model, Abstract), wherein the non-face region comprises interior regions corresponding to the multiple facial features (see Note 1C); and deforming the source virtual character face model comprises: deforming the face region (Wood: Fig. 5, Pg. 7; see Note 1D) without deforming the non-face region (Wood: This topology does not include the eyeball, as we wish to pose that separately to simulate its independent movement, Pg. 6, par. 1) to obtain a deformed face region, and deforming the non-face region (Wood: We combined this with an anatomy-based eyeball model that can be posed separately to simulate changes in eye gaze, Pg. 2, An eye region 3DMM, par. 1) without deforming the deformed face region (Wood: [The multi-part model] is also the first to allow independent eyeball movement, since we treat it as a separate part, Abstract) to obtain a deformed non-face region; and rendering the target virtual character face model (Wood: we iteratively render a synthetic image Isyn(Φ), compare it to Iobs using our energy function, and update Φ accordingly, Pg. 9, Section 5: Analysis-by-synthesis for gaze estimation; see also Wood: Fig. 7, Pg. 9).

Note 1C: Wood teaches modelling an "eyeball 3D mesh" as shown in Fig. 6 on Pg. 8. When part of the multi-part model (previously analogized to the "source virtual character model" above), a portion of the eyeball model is not externally visible when viewing the multi-part model, i.e., there are regions of the eyeball model that are interior relative to the multi-part model.

Note 1D: In Fig. 5 on Pg. 7, Wood showcases that the facial eye mesh may be deformed without deforming the eyeball mesh.
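Editor's note: the claimed two-stage scheme (deforming the face region while holding the non-face region fixed, then deforming the non-face region while holding the deformed face region fixed) can be sketched as a masked vertex update, where "disabling constraints" for a region corresponds to giving its vertices zero weight. Illustrative only, not part of the record; the geometry and masks are hypothetical toy data.

```python
import numpy as np

def deform_region(verts, mask, displacement):
    """Apply a displacement only where mask is True, holding the rest fixed.

    Vertices outside the mask receive no deformation, mimicking a region
    of influence in which frozen vertices carry weight 0.
    """
    verts = np.asarray(verts, dtype=float).copy()
    verts[mask] += np.asarray(displacement, dtype=float)
    return verts

# Toy model: vertices 0-2 form the "face region", vertex 3 the "eyeball"
# (non-face region), echoing Wood's separately posed eyeball part.
verts = np.zeros((4, 3))
face_mask = np.array([True, True, True, False])

# Stage 1: deform the face region without touching the non-face region.
stage1 = deform_region(verts, face_mask, [0.0, 0.0, 0.5])
# Stage 2: deform the non-face region without touching the deformed face.
stage2 = deform_region(stage1, ~face_mask, [0.1, 0.0, 0.0])
```

After stage 1 the eyeball vertex is unchanged; after stage 2 the already-deformed face vertices are unchanged, matching the claimed ordering of the two deformations.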
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Wood with Sumner. Having the deformation of the source virtual character face model comprise: deforming the face region by fixing the non-face region, and deforming the non-face region by fixing the deformed face region; and rendering the target virtual character face model, as in Wood, would benefit the Sumner teachings by enabling accurate eye depiction while also allowing free eye movement independent of the morphable model: "It is the first morphable model that accurately captures eye region shape, since it was built from high-quality head scans. It is also the first to allow independent eyeball movement, since we treat it as a separate part." (Wood, Abstract).

Regarding claim 8:

Claim 8 is substantially similar to Claim 1, and is therefore rejected for similar reasons. Claim 8 contains the following notable differences: Claim 8 claims a system instead of a method. Sumner teaches a system ("Our system can transfer hand-sculpted alterations as well as deformations resulting from arbitrarily complex procedural or simulation based methods", Pg. 1, Introduction) that is run on a computer: "Mesh deformation plays a central role in computer modeling and animation" (Pg. 1, Introduction). A computer inherently includes a processor and memory. Therefore, it would be obvious to one of ordinary skill in the art to use Sumner's system in tandem with "a non-transitory computer-readable medium; and a processor communicatively coupled to the non-transitory computer-readable medium, the processor configured to execute processor-executable instructions stored in the non-transitory computer-readable medium".

Regarding claim 15:

Claim 15 is substantially similar to Claim 1, and is therefore rejected for similar reasons.
Claim 15 contains the following notable differences: Claim 15 claims a non-transitory computer-readable medium instead of a method. Sumner teaches a system ("Our system can transfer hand-sculpted alterations as well as deformations resulting from arbitrarily complex procedural or simulation based methods", Pg. 1, Introduction) that is run on a computer: "Mesh deformation plays a central role in computer modeling and animation" (Pg. 1, Introduction). A computer inherently includes a processor and memory. Therefore, it would be obvious to one of ordinary skill in the art to use Sumner's system in tandem with "a non-transitory computer-readable medium comprising processor-executable instructions".

Claims 2, 3, 9, 10, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sumner et al. (NPL: Deformation Transfer for Triangle Meshes; from Applicant's IDS) in view of Wood (NPL: A 3D morphable eye region model for gaze estimation), Danieau (US 20230334805 A1), and Blanz et al. (NPL: A Morphable Model For The Synthesis Of 3D Faces).

Regarding claim 2:

Sumner in view of Wood teaches: The method of claim 1 (as shown above), wherein the source human model is a human base face model (see Note 2B); the source virtual character face model is a virtual character base face model (see Note 2B).

Note 2B: As shown in the Figure associated with Note 1A, the source human face model and source virtual character face model are presented without the transformations shown on the target human/virtual character face models. Therefore, the source human face model and source virtual character face model are analogous to a human base face model and virtual character base face model respectively.
Sumner in view of Wood fails to explicitly teach: the target human face model is a human face model generated by combining the human base face model with a human face feature base; and

Danieau in view of Blanz teaches: the target human face model is a human face model generated by combining the human base face model with a human face feature base (Danieau: FIG. 6 illustrates from i) to iii) Different facial 3D meshes of a dataset, with column a) showing the original 3D meshes, column b) illustrating a naïve gradient EDFM … [0088]; Danieau: EDFM stands for "Exaggerating the Difference From the Mean" which consists in emphasizing the features that make a person unique i.e. different from the average face, [0029]; see Note 2C); and

Note 2C: Figure 6 of Danieau showcases that a base human face model (the "original 3D meshes" under column a) of Figure 6, as cited above in [0088]) may be modified by an EDFM that defines features used to create deformed face geometry, similar to how the target human face model in Note 1A is deformed with an expression. However, unlike in the Figure associated with Note 1A, the faces showcased in Figure 6 retain a neutral composition that may be further augmented with an expression later: Blanz teaches that a morphable model may have both shape and expression components, as shown in Figure 7 on Pg. 7. Specifically, the face model depicted under "Texture Extraction & Facial Expression" comprises an expression while also retaining the face shape from the "Reconstruction of Shape & Texture" variant of the model. Therefore, the EDFM is analogous to a human face feature base, and the faces under columns b) through f) of Figure 6 are analogous to the target human face model. It follows that Sumner in view of Danieau and Blanz teaches that the target human face model is a human face model generated by combining the human base face model with a human face feature base.
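Editor's note: the "base face model combined with a feature base" limitation, and Danieau's EDFM idea of pushing a face away from the average, can both be sketched as linear operations on vertex arrays in the style of a 3D morphable model. Illustrative only, not part of the record; the mean face, feature mode, and weights below are hypothetical.

```python
import numpy as np

def apply_feature_basis(base, basis, weights):
    """Combine a base face with a linear feature basis (3DMM-style).

    base:    (N, 3) base/mean face vertices
    basis:   (K, N, 3) per-feature displacement modes
    weights: (K,) blending coefficients
    """
    base = np.asarray(base, dtype=float)
    basis = np.asarray(basis, dtype=float)
    w = np.asarray(weights, dtype=float)
    return base + np.tensordot(w, basis, axes=1)

def edfm(face, mean, k):
    """Exaggerate the Difference From the Mean: scale a face's deviation
    from the average face by factor k (k > 1 caricatures the face)."""
    return np.asarray(mean) + k * (np.asarray(face) - np.asarray(mean))

mean_face = np.zeros((2, 3))                       # hypothetical tiny mesh
nose_mode = np.array([[0., 0., 1.], [0., 0., 0.]])  # hypothetical feature
face = apply_feature_basis(mean_face, nose_mode[None], [0.5])
caricature = edfm(face, mean_face, 2.0)            # doubled deviation
```

In this sketch the feature basis plays the role of the claimed "human face feature base", and `edfm` mirrors the emphasis-from-the-mean operation Danieau describes in [0029].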
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Danieau with Sumner in view of Wood. Generating the target human face model by combining the human base face model with a human face feature base, as in Danieau, would benefit the Sumner in view of Wood teachings by enhancing the artistic inspiration of a given target model: "Since proportions are based on distance ratios, it is assumed that exaggerating the distances between the 3D points also exaggerates the proportions. Proportions are not likely to be 'more normal' after exaggerating the distances. A common observation is that the caricatures are more diverse and less linear than the real faces. This superior diversity motivates the choice of taking into account the caricatures that are made by artists." (Danieau, [0090]).

Additionally, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Blanz with Sumner in view of Wood and Danieau, because Danieau cites Blanz by reference and establishes how the method of Blanz benefits the method proposed by Danieau: "Blanz and Vetter (V. Blanz and T. Vetter, 'A morphable model for the synthesis of 3d faces,' 1999) learnt a Principal Component Analysis model over 200 3D textured faces. Their system allows caricature generation by increasing the distance to the statistical mean in terms of geometry and texture," [0029].

Regarding claim 3:

Sumner in view of Wood teaches: The method of claim 1 (as shown above), wherein: the target human face model is generated by combining the source human face model with a human face expression basis (Sumner: Deformation transfer is a generalization of the concept introduced by expression cloning, which transfers facial expressions from one face mesh to another, Pg. 2, Section 2: Background, par. 1; see Note 3B and Note 1A); and

Note 3B: Sumner teaches that the source human model may be modified with an expression: "Deformation transfer is a generalization of the concept introduced by expression cloning, which transfers facial expressions from one face mesh to another [Noh and Neumann 2001]. In this approach, each expression is encoded with vertex displacements that define the differences between the reference face and the expression face," (Pg. 2, Section 2: Background, par. 1). Such a modification is depicted in Figure 7 on Pg. 7. The vertex displacements taught by Sumner are analogous to a "human face expression basis" that deforms the source human face model to generate a target human face model.

Sumner in view of Wood fails to explicitly teach: the source human model is a human base face model generated by combining the human base face model with a human face feature base; the source virtual character face model is a virtual character base face model generated by combining a virtual character base face model with a virtual character face feature base corresponding to the human face feature base.

Danieau in view of Blanz teaches: the source human model is a human base face model generated by combining the human base face model with a human face feature base (Danieau: FIG. 6 illustrates from i) to iii) Different facial 3D meshes of a dataset, with column a) showing the original 3D meshes, column b) illustrating a naïve gradient EDFM … [0088]; Danieau: EDFM stands for "Exaggerating the Difference From the Mean" which consists in emphasizing the features that make a person unique i.e. different from the average face, [0029]; see Note 3A); the source virtual character face model is a virtual character base face model generated by combining a virtual character base face model with a virtual character face feature base corresponding to the human face feature base (see Note 3C).
Note 3A: Figure 6 of Danieau showcases that a base human face model (the "original 3D meshes" under column a) of Figure 6, and as cited above in [0088]) may be modified by an EDFM that defines features used to create deformed face geometry. Danieau further showcases in Figure 6 that the face models under columns b) to f) retain a neutral composition, akin to the source human face model showcased and labelled in the figure associated with Note 1A above. Sumner teaches: "[deformation transfer] employs source and target meshes with matching reference poses much like facial animation uses a neutral face or skeleton-based techniques use a mesh in the T-pose," (Pg. 2, Section 2: Background, par. 4). That is, when the source human base model has a matching reference pose, it is available for use in deformation transfer. Because the deformed face model taught by Danieau shares a reference pose with the source human model taught by Sumner (specifically, the pose shown in the "Reference" faces of Fig. 7 of Sumner), the deformed face model is analogous to the source human model.

Note 3C: Sumner teaches: "The concept of deformation transfer can be posed as an analogy: given a pair of source meshes, S and S′, and a target mesh T, generate a new mesh T′ such that the relationship between T and T′ is analogous to the relationship between S and S′," (Pg. 2, Section 2: Background, par. 4). In Note 2C, it was shown that Danieau teaches a source human model generated by combining a human base model with a human face feature base. Referring to the analogy posed by Sumner above, when the human base face model is S, the source human model is S′, and the virtual character base face model is T, it would be obvious to one of ordinary skill in the art to generate a source virtual character face model by combining a virtual character base face model with a virtual character face feature base corresponding to the human face feature base, because Blanz teaches that shape and facial expressions are deformations that may be applied to a morphable model (Blanz, Figure 7, Pg. 7), and because Sumner teaches that deformations or "relationships" may be transferred to other meshes.

Generating the target human face model by combining the human base face model with a human face feature base, as in Danieau, would benefit the Sumner in view of Wood teachings by enhancing the artistic inspiration of a given target model: "Since proportions are based on distance ratios, it is assumed that exaggerating the distances between the 3D points also exaggerates the proportions. Proportions are not likely to be 'more normal' after exaggerating the distances. A common observation is that the caricatures are more diverse and less linear than the real faces. This superior diversity motivates the choice of taking into account the caricatures that are made by artists." (Danieau, [0090]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Blanz with Sumner in view of Danieau, because Danieau cites Blanz by reference and establishes how the method of Blanz benefits the method proposed by Danieau: "Blanz and Vetter (V. Blanz and T. Vetter, 'A morphable model for the synthesis of 3d faces,' 1999) learnt a Principal Component Analysis model over 200 3D textured faces. Their system allows caricature generation by increasing the distance to the statistical mean in terms of geometry and texture," [0029].
Regarding claim 9: Claim 9 is substantially similar to Claim 2, and is therefore rejected for similar reasons. Claim 9 contains the following notable differences: Claim 9 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 10: Claim 10 is substantially similar to Claim 3, and is therefore rejected for similar reasons. Claim 10 contains the following notable differences: Claim 10 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 16: Claim 16 is substantially similar to Claim 2, and is therefore rejected for similar reasons. Claim 16 contains the following notable differences: Claim 16 claims a non-transitory computer-readable medium instead of a method. In the rejection of Claim 15, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 17: Claim 17 is substantially similar to Claim 3, and is therefore rejected for similar reasons. Claim 17 contains the following notable differences: Claim 17 claims a non-transitory computer-readable medium instead of a method. In the rejection of Claim 15, the relevant independent claim, it was shown that Sumner teaches a system.

Claims 4, 5, 11, 12, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sumner et al. (NPL: Deformation Transfer for Triangle Meshes; from Applicant's IDS) in view of Wood (NPL: A 3D morphable eye region model for gaze estimation) and Carr (US 20130076619 A1).

Regarding claim 4:

Sumner in view of Wood teaches: The method of claim 1 (as shown above). Sumner in view of Wood fails to explicitly teach: wherein deforming the face region without deforming the non-face region comprises enabling constraints related to the face region and disabling constraints related to the non-face region.
Carr teaches: wherein deforming a selected region (Carr: Points on the 3-D model within the deformation curve's region of influence (i.e., that have weights greater than 0) are deformed according to the deformation of the curve. [0047]) without deforming the non-selected region (Carr: Note that only points in the region of influence of the curve (i.e., points that have weights greater than 0) will have a deformation applied to them [0047]) comprises enabling constraints related to the selected region (Carr: the two dashed curves in FIG. 3A illustrate a constraint on the region of influence of the deformation curve, according to at least some embodiments) and disabling constraints related to the non-selected region (Carr: In some embodiments, all points on the mesh outside these two constraint curves may be given weights of 0 [0046]; see also Note 4A).

Note 4A: Carr teaches that their method may operate on various 3D models, such as a hand (Fig. 9A-9B), a cactus (Fig. 9C-9D) and a dragon (Fig. 9E-9F). Therefore, it would be obvious to one of ordinary skill in the art to apply the teachings of Carr to a face. In Wood, it is taught that the eyeball model (previously analogized to the claimed non-face region in claim 1) is separate from the face and that Wood intends to deform the eyeball separate from the eye region on the face mesh. Therefore, when the teachings of Carr are combined with the teachings of Sumner in view of Wood, it would be obvious to one of ordinary skill in the art to deform the face region without deforming the non-face region by enabling constraints related to the face region and disabling constraints related to the non-face region. Similarly, the reverse is true. That is, one would find it obvious to deform the eyeball model in Wood without deforming the eye region on the face mesh.
Therefore, one of ordinary skill in the art would also find it obvious to deform the non-face region without deforming the deformed face region by enabling constraints related to the non-face region and disabling constraints unrelated to the non-face region.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Carr with Sumner in view of Wood. Deforming a selected region without deforming the non-selected region by enabling constraints related to the selected region and disabling constraints related to the non-selected region, as in Carr, would benefit the Sumner in view of Wood teachings because the deformation curves taught by Carr enable more freeform transformations: "Unlike previous freeform deformation techniques, embodiments are not dependent on manipulation of a fixed set of parameters to perform deformations, and may provide for both local and global deformation." (Carr, [0006]).

Regarding claim 5:

Sumner in view of Wood teaches: The method of claim 1 (as shown above). Sumner in view of Wood fails to explicitly teach: wherein deforming the non-face region without deforming the deformed face region comprises enabling constraints related to the non-face region and disabling constraints unrelated to the non-face region.

Carr teaches: wherein deforming a selected region (Carr: Points on the 3-D model within the deformation curve's region of influence (i.e., that have weights greater than 0) are deformed according to the deformation of the curve. [0047]) without deforming the non-selected region (Carr: Note that only points in the region of influence of the curve (i.e., points that have weights greater than 0) will have a deformation applied to them [0047]) comprises enabling constraints related to the selected region (Carr: the two dashed curves in FIG. 3A illustrate a constraint on the region of influence of the deformation curve, according to at least some embodiments) and disabling constraints related to the non-selected region (Carr: In some embodiments, all points on the mesh outside these two constraint curves may be given weights of 0 [0046]; see also Note 4A).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Carr with Sumner in view of Wood. Deforming a selected region without deforming the non-selected region by enabling constraints related to the selected region and disabling constraints related to the non-selected region, as in Carr, would benefit the Sumner in view of Wood teachings because the deformation curves taught by Carr enable more freeform transformations: "Unlike previous freeform deformation techniques, embodiments are not dependent on manipulation of a fixed set of parameters to perform deformations, and may provide for both local and global deformation." (Carr, [0006]).

Regarding claim 11: Claim 11 is substantially similar to Claim 4, and is therefore rejected for similar reasons. Claim 11 contains the following notable differences: Claim 11 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 12: Claim 12 is substantially similar to Claim 5, and is therefore rejected for similar reasons. Claim 12 contains the following notable differences: Claim 12 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 18: Claim 18 is substantially similar to Claim 4, and is therefore rejected for similar reasons. Claim 18 contains the following notable differences: Claim 18 claims a non-transitory computer-readable medium instead of a method.
In the rejection of Claim 15, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 19: Claim 19 is substantially similar to Claim 5, and is therefore rejected for similar reasons. Claim 19 contains the following notable differences: Claim 19 claims a non-transitory computer-readable medium instead of a method. In the rejection of Claim 15, the relevant independent claim, it was shown that Sumner teaches a system.

Claims 6, 7, 13, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sumner et al. (NPL: Deformation Transfer for Triangle Meshes; from Applicant's IDS) in view of Wood (NPL: A 3D morphable eye region model for gaze estimation), Carr (US 20130076619 A1), and Saito (NPL: Smooth Contact-Aware Facial Blendshapes Transfer; from Applicant's IDS).

Regarding claim 6:

Sumner in view of Wood and Carr teaches: The method of claim 5 (as shown above). Sumner in view of Wood and Carr fails to teach: wherein the constraints related to the non-face region comprise constraints defined based on virtual triangles connecting the face region and the non-face region.

Saito teaches: wherein the constraints (Saito: This section describes three additional constraints to the original deformation transfer: contact, smoothness, and anchoring, Pg. 4, Section 4: Extended Deformation Transfer) related to the non-face region comprise constraints defined based on virtual triangles (Saito: Contact awareness can be naturally incorporated into deformation transfer by simply adding virtual triangles where the interactions occur, in the spirit of [Ho et al. 2010], Pg. 4, Section 4.1: Contact awareness) connecting the face region and the non-face region (Saito: Virtual triangles are then formed by filling in the hole defined by specified vertices and triangulating that region, Pg. 4, Section 4.1: Contact awareness; see Note 6A).

Note 6A: Saito teaches that virtual triangles may be generated by filling in "holes" defined by vertices. In Wood, it was shown that a non-face region, such as an eyeball, may be separate from the eye region on the face mesh. Therefore, when combining Saito with the teachings of Sumner in view of Wood and Carr, one of ordinary skill in the art would understand Saito to teach that the virtual triangles should fill in the "hole" between the eyeball and the eye region on the face mesh.

Saito teaches: "Deformation transfer [Sumner and Popovic 2004] is a technique that applies deformation of an example animation to other target objects in the gradient space. […] The original deformation transfer technique has proven to be effective for the blendshapes transfer in the production of Captain Harlock. However we also noticed room for further improvement where the penetrations and separations of adjacent parts start to appear" (Pg. 3, Facial Blendshapes Transfer, par. 2). Saito further teaches: "While this transfer pipeline proves to be effective, it also exposes the limitations of the original deformation transfer technique. Firstly, as mentioned in [Sumner 2005], it is not contact-aware, which for facial blendshapes causes eyelids to penetrate, or not shut completely. Secondly, it is not smoothness-aware, often yielding crumpling artifact where the mesh is extremely concentrated, e.g. eye lids and mouth corners. In this paper, we propose a method to cope with these problems," (Pg. 3, Section 2: Background).

Note that Sumner of 2004 as discussed by Saito refers to the primary reference Sumner cited in this rejection. Therefore, it would be obvious to one of ordinary skill in the art to combine Saito with Sumner in view of Wood and Carr, because Saito explicitly teaches an improvement over Sumner.
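Editor's note: the virtual-triangle construction of Note 6A, which bridges the hole between two separate mesh parts so that deformation transfer can constrain their spatial relationship, can be sketched as follows (illustrative only, not part of the record; the boundary loops and vertex indices are hypothetical):

```python
def virtual_triangles(face_loop, nonface_loop):
    """Bridge two boundary vertex loops with 'virtual' triangles.

    Given equal-length index loops around the hole between a face-region
    boundary (e.g. an eyelid rim) and a non-face-region boundary (e.g. an
    eyeball rim), triangulate the gap: each quad spanning the loops is
    split into two triangles, filling the hole between the two parts.
    """
    n = len(face_loop)
    assert n == len(nonface_loop) and n >= 2
    tris = []
    for i in range(n):
        j = (i + 1) % n
        # two triangles per quad spanning the gap
        tris.append((face_loop[i], nonface_loop[i], face_loop[j]))
        tris.append((face_loop[j], nonface_loop[i], nonface_loop[j]))
    return tris

# Hypothetical rims: face-region vertices 0-2 face eyeball vertices 10-12.
tris = virtual_triangles([0, 1, 2], [10, 11, 12])
```

The resulting triangles carry no surface of their own; including them in the transfer simply adds constraints that tie the relative motion of the two regions together, which is the contact-awareness idea cited above.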
Regarding claim 7: Sumner in view of Wood, Carr, and Satio teaches: The method of claim 6 (as shown above), wherein the constraints related to the non-face region comprise constraints defined based on pairs of corresponding surfaces in the face region and the non-face region of the source virtual character face model (Satio: The virtual triangles help to preserve the spatial relationships between separate parts of the face. Thus the contacts and separations dynamically happening with the original shape will be correctly transferred to the target shape, Pg. 2, Section 1: Introduction, par. 4; see Note 7A).

Note 7A: Satio discusses that the virtual triangles “preserve the spatial relationships between separate parts of the face”. In Note 6A, it was shown that said triangles are “formed by filling in the hole defined by specified vertices and triangulating that region”. Therefore, one of ordinary skill in the art would understand that the virtual triangles will be formed based on regions of the mesh that should maintain a spatial relationship, i.e., said regions correspond.

Regarding claim 13: Claim 13 is substantially similar to Claim 6, and is therefore rejected for similar reasons. Claim 13 contains the following notable differences: Claim 13 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 14: Claim 14 is substantially similar to Claim 7, and is therefore rejected for similar reasons. Claim 14 contains the following notable differences: Claim 14 claims a system instead of a method. In the rejection of Claim 8, the relevant independent claim, it was shown that Sumner teaches a system.

Regarding claim 20: Claim 20 is substantially similar to Claim 6, and is therefore rejected for similar reasons. Claim 20 contains the following notable differences: Claim 20 claims a non-transitory computer-readable medium instead of a method.
In the rejection of Claim 15, the relevant independent claim, it was shown that Sumner teaches a system.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE whose telephone number is (571) 270-5765. The examiner can normally be reached Monday-Thursday, 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Feb 06, 2024
Application Filed
Aug 08, 2025
Non-Final Rejection — §103
Dec 11, 2025
Response Filed
Feb 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586303
GEOMETRY-AWARE THREE-DIMENSIONAL SYNTHESIS IN ALL ANGLES
2y 5m to grant · Granted Mar 24, 2026
Patent 12530847
IMAGE GENERATION FROM TEXT AND 3D OBJECT
2y 5m to grant · Granted Jan 20, 2026
Patent 12530808
Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud
2y 5m to grant · Granted Jan 20, 2026
Patent 12524946
METHOD FOR GENERATING FIREWORK VISUAL EFFECT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Jan 13, 2026
Patent 12380621
COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR GENERATING ENHANCED MOTION DATA AND RENDERING OBJECTS
2y 5m to grant · Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+25.0%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
