Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments and Remarks
Applicant's arguments filed 12/19/2025 have been fully considered and are addressed as follows:
Applicant argues:
The Office Action alleges that the triangles in a 3D mesh in Finnigan is equivalent to a virtual triangle generated connecting the face region and the non-face region of the source virtual character model in claim 1 at issue. Office Action, p. 3 (citing Finnigan, Paragraph [0045]). This analogy is incorrect. The triangles in Finnigan are a type of polygon and they are part of the 3D mesh. Finnigan, paragraph [0045]. In contrast, the virtual triangles in claim 1 are not part of the source virtual character model, but generated to connect the face region and non-face region of the source virtual character model.
(Remarks, Page 21).
Applicant’s argument is unpersuasive because the features upon which applicant relies (i.e., “In contrast, the virtual triangles in claim 1 are not part of the source virtual character model, but generated to connect the face region and non-face region of the source virtual character model”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues:
The Office action also alleges that minimizing a distance between corresponding cloud points in Finnigan is equivalent to minimizing deformation of a virtual triangle in claim 1 at issue. Office Action, p. 4 (citing Finnigan, paragraph [0138]). However, minimizing a distance is different from minimizing deformation, just according to the plain language. Further, minimizing the distance between two cloud points in Finnigan is to converge a source cloud point and a target cloud point. Finnigan, paragraph [0053]. In contrast, minimized deformation to the one or more virtual triangles is a constraint for deforming the source virtual character model, as specified in amended claim 1.
(Remarks, Page 21).
Applicant’s argument is unpersuasive because, in the context of its usage in Finnigan, the distance minimization serves to minimize deformation of the mesh by constraining the deformation with a template: “the 3D mesh of a virtual avatar to generate a target 3D mesh of the virtual avatar such that the target 3D mesh has the topology of the template 3D mesh but matches the geometry of the source (input) 3D mesh.” (See ¶ 56).
Here, minimizing deformation amounts to minimizing distortion of the geometry features that are intended to be preserved.
Applicant argues:
The Office Action also alleges that reducing the number of polygons in a 3D mesh in Finnigan is equivalent to removing the one or more virtual triangles from the target virtual character model. Office Action, p. 4 (citing Finnigan, paragraph [0138]). This is incorrect. First, reducing is different from removing. Reducing the number of polygons in the 3D mesh does not remove all the polygons. In contrast, the one or more virtual triangles are all removed from the target virtual character model. Second, as clarified above, the polygons (e.g., triangles) are part of the 3D mesh in Finnigan. However, the virtual triangles are not part of a virtual character model, but generated to connect the face region and non-face region of the source virtual character. After the source virtual character model is deformed to generate the target virtual character model, the virtual triangles are removed from the target virtual character model.
(Remarks, Page 22).
Applicant’s argument is unpersuasive because the features upon which applicant relies (i.e., “remove all”, “the virtual triangles are not part of a virtual character”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Use of italics indicates a limitation that is not explicitly disclosed by the reference alone.
Claims 1, 5, 6, 8, 9, 13-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Finnigan (US 2025/0148720) in view of Zhang, Real-Time Facial Expression Driving based on 3D Facial Feature Point.
Claim 1
Examiner’s Interpretation:
Interpretation of “Virtual Triangles”:
Applicant’s specification does not explicitly limit the scope of “virtual triangles” beyond the plain meaning of the term in computer graphics (such as in generating polygonal meshes).
Accordingly, the scope of virtual triangles covers conventional use of polygons in computer graphics rendering.
In the context of its usage in the claim, the scope of “virtual triangles” is limited only to triangles that are “connecting the face region and the non-face region of the source virtual character model.”
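By way of illustration only (this sketch is not from the record of either reference, and every name in it is hypothetical), connecting two mesh regions with triangles in the conventional computer-graphics sense can be sketched as bridging their boundary loops:

```python
# Illustrative sketch only: bridging two closed boundary loops of equal
# length with a strip of triangles, as in ordinary polygonal meshing.
# The loops might be, e.g., the boundary of a face region and the
# boundary of a non-face region of a character mesh.

def bridge_loops(loop_a, loop_b):
    """Return a list of (i, j, k) vertex-index triangles that span the
    gap between two closed vertex loops of equal length."""
    n = len(loop_a)
    triangles = []
    for i in range(n):
        j = (i + 1) % n
        # Two triangles per quad spanning the gap between the loops.
        triangles.append((loop_a[i], loop_b[i], loop_a[j]))
        triangles.append((loop_a[j], loop_b[i], loop_b[j]))
    return triangles

tris = bridge_loops([0, 1, 2, 3], [4, 5, 6, 7])
print(len(tris))  # 8 triangles for two 4-vertex loops
```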
Interpretation of “constraint of minimized deformation”:
Applicant’s specification does not appear to limit the scope of the constraint to a particular technique. The plain meaning of the claim term would include any constraint which minimizes deformation.
Claim Mapping:
Finnigan discloses a computer implemented method for generating a virtual character (Fig. 3; ¶ 18: “FIG. 3 is a schematic that depicts an example of an end-to-end workflow to generate three-dimensional (3D) meshes of virtual characters, in accordance with some implementations.”), the method comprising:
accessing (Finnigan, abstract: “obtaining a source three-dimensional (3D) mesh of a face of an avatar, wherein the source 3D mesh includes a first plurality of polygons”)
the source virtual character model comprises a face region (Finnigan, ¶ 15: “3D) mesh of a face of an avatar”) and a non-face region1 (Finnigan, ¶ 46: “a higher mesh density in areas subject to higher deformation (movement), e.g., eyes, mouth, shoulders, elbows, etc.”); and
deforming the source virtual character model comprises:
generating, for the source virtual character model, one or more virtual triangles (Finnigan, ¶ 45: “A 3D mesh commonly includes a plurality of polygons, e.g., triangles, quads, etc., that are connected to form the 3D mesh. In some implementations, the quads may be divided into triangles during rendering of the 3D mesh. Each vertex of the polygon is associated with a respective 3D coordinate”) connecting the face region and the non-face region (Finnigan, ¶ 46: “Superior (good) topology is topology where the underlying mesh of the 3D object or virtual character is evenly distributed, with a higher mesh density in areas subject to higher deformation (movement), e.g., eyes, mouth, shoulders, elbows, etc. Additionally, realistic animation of a virtual character is enabled when the mesh vertices are aligned with muscle locations, and the mesh edges are aligned with muscle directions.”) of the source virtual character model (Finnigan, ¶ 124: “The fitted target 3D mesh 372 is provided to a radial basis function (RBF) deformation module 380 to determine an augmented target 3D mesh 382 that may include other attached features, e.g., eyeballs, ears, etc. The augmented target 3D mesh 382 may be provided to an animation module 390 and utilized to generate animation for the virtual character.”)
deforming the source virtual character model (Finnigan, ¶ 4: “Animation of a virtual character is commonly implemented via deformations of vertices of a 3D mesh of the virtual character”) with a constraint of minimized deformation to the one or more virtual triangles to generate the target virtual character model (Finnigan, ¶ 53: “The ICP technique is applied to minimize a distance between corresponding cloud points so that a source cloud point (a received 3D mesh with poor topology) and target cloud point (a template 3D mesh with good topology) converge.”); and
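For context on the ICP technique quoted above, the core objective of minimizing a summed squared distance between corresponding source and target points can be sketched generically as follows. This is not code from either reference; with correspondences held fixed, the optimal translation is simply the difference of centroids, and a full ICP loop would additionally estimate a rotation and re-compute correspondences each iteration.

```python
# Generic illustration of the distance-minimization step at the heart of
# ICP: with fixed point correspondences, the translation minimizing the
# sum of squared distances is the difference of the two centroids.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def best_translation(source, target):
    cs, ct = centroid(source), centroid(target)
    return tuple(ct[i] - cs[i] for i in range(3))

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tgt = [(1.0, 2.0, 0.0), (2.0, 2.0, 0.0), (1.0, 3.0, 0.0)]
print(best_translation(src, tgt))  # approximately (1.0, 2.0, 0.0)
```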
removing the one or more virtual triangles from the target virtual character model (Finnigan, ¶ 138: “FIG. 5 depicts source 3D mesh 510 that is retopologized (540) to a second 3D mesh 550. In this illustrative example, a number of polygons in the 3D mesh is reduced from an order of about a million polygons to about 40,000 polygons. Additionally, a number of faces (closed polygons in the 3D mesh) in the second 3D mesh is also fewer than a number of faces in the source 3D mesh.”); and
rendering the target virtual character model (Finnigan, ¶ 45: “The quality (e.g., smoothness, realism, etc.) of animation of a virtual character on a virtual experience platform depends on the quality of an underlying 3D mesh of the virtual character. The quality of a 3D mesh may be characterized by its topology. A 3D mesh commonly includes a plurality of polygons, e.g., triangles, quads, etc., that are connected to form the 3D mesh. In some implementations, the quads may be divided into triangles during rendering of the 3D mesh”)
Finnigan does not explicitly disclose, but Zhang makes obvious, accessing a source human face model, a target human face model, and a source virtual character model (Fig. 1: source human model and target human model, usable to drive virtual character models (Figs. 2-4)).
deforming the source virtual character model based on the source human face model and the target human face model to generate a target virtual character model (“Figure 3 shows the nonrigid tracking results of our algorithm. The marked part in the figure is the main no rigid motion part of the face. When eyes are closed and opened, this algorithm can capture the no rigid movement of eyes and eyebrows, and the expression of virtual characters is consistent with that of real faces. When the mouth is open, moving to the right and left, the expression of the virtual character and the corresponding three-dimensional feature points can conform to the mouth details of the real facial expression.”)
Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use a source and target human model.
One of ordinary skill in the art would have had motivation to “provide more attractive interaction effects for interactive fields such as games and social networks.” (Section I). One of ordinary skill in the art would have had a reasonable expectation of success because Zhang’s method is applicable to virtually generated characters generally: “Our method can not only use real faces to drive the expression of virtual characters, but also use virtual faces to drive, which is helpful to the real-time expression migration of virtual characters,” and the underlying techniques used in Finnigan are common in the art for generation of virtual characters.
Claim 5
Finnigan discloses wherein deforming the source virtual character model with minimized deformation to the one or more virtual triangles comprises:
deforming the face region by fixing the non-face region and removing the one or more virtual triangles from the source virtual character model (Finnigan, ¶ 138: “FIG. 5 depicts source 3D mesh 510 that is retopologized (540) to a second 3D mesh 550. In this illustrative example, a number of polygons in the 3D mesh is reduced from an order of about a million polygons to about 40,000 polygons. Additionally, a number of faces (closed polygons in the 3D mesh) in the second 3D mesh is also fewer than a number of faces in the source 3D mesh.”);
adding the one or more virtual triangles back to the deformed source virtual character model (Finnigan, ¶ 46: “Superior (good) topology is topology where the underlying mesh of the 3D object or virtual character is evenly distributed, with a higher mesh density in areas subject to higher deformation (movement), e.g., eyes, mouth, shoulders, elbows, etc. Additionally, realistic animation of a virtual character is enabled when the mesh vertices are aligned with muscle locations, and the mesh edges are aligned with muscle directions.”); and
deforming the non-face region by fixing the deformed face region and by imposing a constraint of the minimized deformation to the one or more virtual triangles (¶¶ 149-150: “creating a flattened representation of the 3D shape with minimal area distortion. In some implementations, applying a LSCM technique may include solving a system of linear equations that minimizes the sum of squared errors between a desired angle preservation (between respective edges and/or line segments) and the actual angle changes under the mapping, effectively determining a best fit conformal transformation based on the least squares principle.”)
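The least-squares principle cited in ¶¶ 149-150 can be illustrated generically. This is not the LSCM system itself (which solves for 2D (u, v) coordinates per vertex); it is only the simplest one-parameter analogue of minimizing a sum of squared errors, where the minimizer has a closed form.

```python
# Generic illustration of the least-squares principle: choose the
# parameter a that minimizes sum((a * x_i - y_i)**2).  Setting the
# derivative to zero gives the closed form a = sum(x*y) / sum(x*x).

def least_squares_scale(xs, ys):
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
print(least_squares_scale(xs, ys))  # 2.0 (exact fit, zero residual)
```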
Claim 6
Finnigan discloses wherein the non-face region comprises an interior mouth region or an interior eye region (¶ 36: “For example, a smile of a virtual character in a virtual environment may be depicted by adjusting vertices of a mesh that corresponds to the mouth and/or other parts of the face of the virtual character. Similarly, animation of an avatar's face may be performed to depict an avatar speaking; adjustment of the eye(s) may be utilized to depict eye movements of a virtual character during a dance sequence. In some implementations, animation of the face may be performed to depict facial expressions of virtual characters associated with certain emotions of a virtual character.”)
Claim 8
Finnigan discloses wherein deforming the source virtual character model further comprises: prior to generating the one or more virtual triangles connecting the face region and the non-face region of the source virtual character model, creating a gap on the source virtual character model between the face region and the non-face region (¶¶ 145-147: “FIG. 6 depicts a 3D mesh (e.g., a 3D mesh of a virtual avatar after it has been retopologized) 610 and a trimmed 3D mesh of the virtual character 650, as well as the trimmed 3D mesh of the virtual character that includes the edges 660. As depicted in FIG. 6, portions corresponding to hair (615a, 615b, and 615c), the ears (620a and 620b), and the neck 630 are excluded from the 3D mesh to generate the trimmed 3D mesh 650 and edge-included trimmed 3D mesh 660. In some implementations, exclusion of the portions of the 3D mesh may include excluding vertices (3D coordinates) associated with the excluded portions from the second 3D mesh. In some implementations, all specified landmarks on a face are included in the trimmed 3D mesh and not considered for exclusion. In some implementations, a flood fill technique may be applied starting at the nose of the virtual avatar, and encountered vertices are included in the trimmed 3D mesh until a vertex obscured by hair, a vertex where the normal is pointing away from the front, or a vertex that meets a threshold distance from landmarks in the face of the virtual character is encountered. In some implementations, a largest connected component to the trimmed 3D mesh is additionally identified by the flood fill process.”)
Claim 9
The same teachings and rationales in claim 1 are applicable to claim 9.
Claim 13
The same teachings and rationales in claim 5 are applicable to claim 13.
Claim 14
The same teachings and rationales in claim 6 are applicable to claim 14.
Claim 15
The same teachings and rationales in claim 8 are applicable to claim 15.
Claim 16
The same teachings and rationales in claim 1 are applicable to claim 16.
Claim 20
The same teachings and rationales in claim 5 are applicable to claim 20.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Finnigan (US 2025/0148720) in view of Zhang, Real-Time Facial Expression Driving based on 3D Facial Feature Point, and Kolen (US 2020/0306640).
Claim 7
Finnigan does not explicitly disclose, but Kolen makes obvious wherein the non-face region comprises an accessory of the virtual character (“The system may then generate a custom visual appearance model and a custom behavior model corresponding to the real person, which may subsequently be used to render, within a virtual environment of a video game, a virtual character that resembles the real person in appearance and in-game behavior.”)
Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use virtual accessories.
One of ordinary skill in the art would have motivation to increase personalization. One of ordinary skill in the art would have had a reasonable expectation of success because use of a mesh could be extended to virtual accessories depending on the desired rendering technique.
Allowable Subject Matter
Claims 2-4, 10-12, and 17-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Indicating Allowable Subject Matter:
Claim 2:
The claimed distances are not disclosed; Finnigan instead suggests comparing distances between the template and the virtual character mesh.
Claims 3-4:
These claims depend from claim 2, which would be allowable.
Claims 10-12:
Substantially the same scope as claims 2-4.
Claims 17-19:
Substantially the same scope as claims 2-4.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571)272-4582. The examiner can normally be reached on Monday through Friday, 9:00am-5:30pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached on (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN M GRAY/Primary Examiner, Art Unit 2611
1 The scope of non-face would include mesh areas related to the eyes, mouth interior, neck, etc. (“other models ("non-face region") such as an eye model and a mouth model. These models may contain internal areas that are not observable from the face of the virtual character. For example, an eye model may include an eye socket for holding the eyeball and the majority part of the eye socket is not observable from the face of the virtual character. Likewise, the mouth model of the virtual character also includes an interior portion holding the teeth and tongue of the virtual character that is not observable from the face of the virtual character. These non-face regions do not have corresponding portions in the human face model. As such, the deformation transfer from the source human face model S to the target human face model S does not provide information regarding the deformation of the non-face region. If directly applying the deformation transfer from the source human face model S to the target human face model 5 to the source virtual character A, the face region of the source virtual character A can be deformed properly, but the non-face region is not deformed. As a result, the non-face region may be dislocated relative to the face region leading to artifacts such as the eye socket or mouth part protruding outside the face.”)(Specification, ¶ 30).