Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,921

IMAGE DATA PROCESSING METHOD, METHOD AND APPARATUS FOR CONSTRUCTING DIGITAL VIRTUAL HUMAN, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

Final Rejection (§103)

Filed: Jun 27, 2023
Examiner: SATCHER, DION JOHN
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average; 33 granted / 39 resolved; +22.6% vs TC avg)
Interview Lift: +14.2% (moderate lift, over resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 68 across all art units (29 currently pending)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 39 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's amendments filed on 12/31/2025 have been entered and made of record.

Currently pending claims: 1–4, 6–11, 13–18 and 20
Independent claims: 1, 8 and 15
Amended claims: 1, 7, 8, 14 and 15
Cancelled claims: 5, 12 and 19

Response to Applicant's Arguments

This Office Action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on 12/31/2025. In view of the amendments filed on 12/31/2025 to the specification, the specification objection is withdrawn. Applicant's reply (December 31, 2025) includes substantive amendments to the claims. This Office Action has been updated with new grounds of rejection addressing those amendments. Further, Applicant's arguments/remarks with respect to independent claims 1, 8 and 15 have been considered but are moot because the arguments do not apply to the combination of references being used in the current rejection and do not address the newly cited reference Wampler (US 20180130256 A1), as explained in the body of the rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Booth et al. (See NPL attached, “A 3D Morphable Model learnt from 10,000 faces”, hereafter, “Booth”) in view of Joris et al. (US 20220020214 A1, hereafter, “Joris”) further in view of Wampler (US 20180130256 A1, hereafter, “Wampler”).

Regarding claim 1, Booth teaches [an image data processing method performed by a computer device, the method comprising]: acquiring at least two initial facial models and [a fused weight model shared by the initial facial models], the at least two initial facial models having a same topological structure of a 3D virtual character corresponding to a digital virtual human and the topological structure including a plurality of vertexes and a plurality of connecting edges between the plurality of vertexes (See Booth, [Pg. 5545, Col. 2, ln. 16–20, 4.1. 3DMM construction], Dense correspondence: A collection of meshes are reparametrized into a form where each mesh has the same number of vertices joined into a triangulation that is shared across all meshes. Furthermore, the semantic or anatomical meaning of each vertex is shared across the collection); [determining an edge vector of each connecting edge] and a connecting matrix of each of the initial facial models based on the topological structure (See Booth, [Pg. 5545, Col. 2, ln. 7–10, 4. Background], The geometry of a 3D facial mesh is defined by the vector X = [x_1^T, x_2^T, …, x_n^T]^T ∈ ℝ^(3n), where n is the number of vertices and x_i = [x_xi, x_yi, x_zi]^T ∈ ℝ^3 describes the X, Y, and Z coordinates of the i-th vertex. Note: Examiner is interpreting the vector X as the connecting matrix), wherein the connecting matrix of an initial facial model represents vertex information of each connecting edge in the initial facial model (See Booth, [Pg. 5545, Col. 2, ln. 7–10, 4. Background], The geometry of a 3D facial mesh is defined by the vector X = [x_1^T, x_2^T, …, x_n^T]^T ∈ ℝ^(3n), where n is the number of vertices and x_i = [x_xi, x_yi, x_zi]^T ∈ ℝ^3 describes the X, Y, and Z coordinates of the i-th vertex); [determining a fused edge vector of each connecting edge based on the fused weight model shared by the initial facial models and the edge vector of the corresponding connecting edge in each of the initial facial models; determining fused position information of each vertex of each connecting edge in the topological structure based on the fused edge vector of the corresponding connecting edge and the connecting matrix of each of the initial facial models; and generating a fused facial model for the 3D virtual character based on the fused position information of each vertex of each connecting edge in the topological structure].
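As an aside for readers less familiar with the cited mesh notation, the representation the rejection maps onto Booth (a shared-topology vertex vector X ∈ ℝ^(3n) plus a fixed list of connecting edges, from which per-edge vectors follow) can be sketched as below. This is an illustrative toy example, not code from Booth or the application; the 4-vertex mesh and edge list are invented for demonstration.

```python
import numpy as np

# Illustrative toy mesh (invented data, not from Booth or the application).
# A mesh with shared topology is a vertex array (X in R^(3n) when stacked)
# plus a fixed list of connecting edges shared across all meshes.
X = np.array([
    [0.0, 0.0, 0.0],   # x_1
    [1.0, 0.0, 0.0],   # x_2
    [0.0, 1.0, 0.0],   # x_3
    [0.0, 0.0, 1.0],   # x_4
])
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]  # topology shared by all models

# Edge vector of each connecting edge: difference of its endpoint positions.
edge_vectors = np.array([X[j] - X[i] for i, j in edges])
print(edge_vectors.shape)  # (4, 3)
```

Because every model shares the same topology, the same `edges` list indexes corresponding edge vectors in each initial facial model.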
However, Booth fail(s) to teach an image data processing method performed by a computer device, the method comprising; a fused weight model shared by the initial facial models; determining an edge vector of each connecting edge; determining a fused edge vector of each connecting edge based on the fused weight model shared by the initial facial models and the edge vector of the corresponding connecting edge in each of the initial facial models; determining fused position information of each vertex of each connecting edge in the topological structure based on the fused edge vector of the corresponding connecting edge and the connecting matrix of each of the initial facial models; and generating a fused facial model for the 3D virtual character based on the fused position information of each vertex of each connecting edge in the topological structure.

Joris, working in the same field of endeavor, teaches: an image data processing method performed by a computer device (See Joris, ¶ [0059], FIG. 7 shows a suitable computing system 800 for hosting the system 1 of FIG. 1. Computing system 900 may in general be formed as a suitable general-purpose computer and comprise a bus 910, a processor 902, a local memory 904, one or more optional input interfaces 914, one or more optional output interfaces 916, a communication interface 912, a storage element interface 906 and one or more storage elements 908. Bus 910 may comprise one or more conductors that permit communication among the components of the computing system), the method comprising.

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to an image data processing method performed by a computer device, the method comprising, based on the method of Joris's reference. The suggestion/motivation would have been to optimize a mesh by incorporating another mesh (See Joris, ¶ [0002–0008]).
However, Booth and Joris fail(s) to teach a fused weight model shared by the initial facial models; determining an edge vector of each connecting edge; determining a fused edge vector of each connecting edge based on the fused weight model shared by the initial facial models and the edge vector of the corresponding connecting edge in each of the initial facial models; determining fused position information of each vertex of each connecting edge in the topological structure based on the fused edge vector of the corresponding connecting edge and the connecting matrix of each of the initial facial models; and generating a fused facial model for the 3D virtual character based on the fused position information of each vertex of each connecting edge in the topological structure.

Wampler, working in the same field of endeavor, teaches: a fused weight model shared by the initial facial models (See Wampler, ¶ [0114], Specifically, as shown in FIG. 3, the stylized mesh deformation system combines the edge-specific as-rigid-as-possible-deformation measure for each input mesh 202-206 by multiplying the edge-specific as-rigid-as-possible-deformation measures by the weights 312-316 and summing the results together. Note: The examiner is interpreting the shared fused weight model as the weights being input into the same interpolation and summed); determining an edge vector of each connecting edge (See Wampler, ¶ [0147], For example, the stylized mesh deformation system holds rotation of each edge group in the input meshes constant and then minimizes the ARAP combined shape-space. Note: Examiner is interpreting the edge groups as the edge vector); determining a fused edge vector of each connecting edge based on the fused weight model shared by the initial facial models (See Wampler, ¶ [0139], To avoid scaling artifacts or negative shape contributions the stylized mesh deformation system can also constrain ∀g,s: B_g^s ≥ 0 and ∀g: Σ_s B_g^s = 1.
Since blend_g(P, B) computes a vector of interpolated edge lengths, equation 8 uses the notation [blend_g(P, B)]_e to refer to a single element corresponding to the e-th edge. Moreover, subscript e is used to denote how the element in this vector corresponding to a single edge is defined. ¶ [0140], then linearly blending the edges for the different shapes together according to the weights given by B_g^s (e.g., blending the input meshes 202-206 based on the weights 312-316). Note: The vector is based on the weights and the edges) and the edge vector of the corresponding connecting edge in each of the initial facial models (See Wampler, ¶ [0147], For example, the stylized mesh deformation system holds rotation of each edge group in the input meshes constant and then minimizes the ARAP combined shape-space, deformation interpolation measure to solve for translation of the edge groups and weights applicable to each input edge group and input mesh. ¶ [0122], Upon rotating the edge group 310, the stylized mesh deformation system blends the rotated edge group 310 from each input mesh 202-206 according to the weights 312-316. Specifically, the stylized mesh deformation system utilizes a blending algorithm (e.g., a shape-space blending algorithm) to combine the rotated edge group 310 from each input mesh 202-206 to generate a blended edge group 318); determining fused position information of each vertex of each connecting edge in the topological structure based on the fused edge vector of the corresponding connecting edge and the connecting matrix of each of the initial facial models (See Wampler, ¶ [0147], Upon obtaining input meshes, the method 400 performs the act 404 of holding rotation constant and solving for translation and weights, …, Indeed, by utilizing a linear blend skinning deformation model as described above, solving for translation and weights is equivalent to solving for a vector of vertex positions representing a modified mesh. ¶ [0155], To solve for T and B while holding R fixed (e.g., the act 404) one or more embodiments of the stylized mesh deformation system utilize a linear blend skinning deformation model as described above. In particular, in utilizing a linear blend skinning deformation model, solving for T is equivalent to solving for the vector of vertex positions q representing the modified mesh. Note: This results in calculating the vector, and utilizing the equation solves for the vertex position. Meshes are vertices and edges, and solving for and creating/combining a mesh is implicitly determining the vertices and edges, with a vertex specifically representing a position on the mesh); and generating a fused facial model for the 3D virtual character based on the fused position information of each vertex of each connecting edge in the topological structure (See Wampler, ¶ [0151], In particular, the stylized mesh deformation system can generate a modified mesh based on the translation, rotation, and weights solved in relation to the acts 404 and 406. In particular, FIG. 5 illustrates a first plurality of modified meshes 502-506 (i.e., different configurations of a horse), a second plurality of modified meshes 512-516 (i.e., different configurations of a face). Note: Examiner is interpreting the fused facial model as the modified mesh that can represent a face).
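The edge-domain pipeline the rejection attributes to Wampler (blend the edge vectors of several same-topology meshes under shared weights, then recover vertex positions from the blended edges) can be illustrated with a minimal least-squares sketch. All names and data here are hypothetical, and the incidence matrix C merely plays the role of the claimed "connecting matrix"; Wampler's actual formulation additionally handles rotations and ARAP energies, which this sketch omits.

```python
import numpy as np

# Hypothetical sketch (names and data invented): blend the edge vectors of
# two same-topology meshes with a shared weight model, then solve a
# least-squares system for the fused vertex positions.
edges = [(0, 1), (0, 2), (1, 2)]
n = 3  # vertices per mesh

mesh_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # 2D for brevity
mesh_b = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
weights = [0.5, 0.5]  # fused weights shared by the initial models

def edge_vecs(V):
    """Edge vector of each connecting edge."""
    return np.array([V[j] - V[i] for i, j in edges])

# Fused edge vector: weighted sum over the input meshes.
fused_e = weights[0] * edge_vecs(mesh_a) + weights[1] * edge_vecs(mesh_b)

# Incidence ("connecting") matrix C: one row per edge, -1 at the tail
# vertex and +1 at the head vertex, so C @ V yields the edge vectors.
C = np.zeros((len(edges), n))
for r, (i, j) in enumerate(edges):
    C[r, i], C[r, j] = -1.0, 1.0

# Pin vertex 0 at the origin to remove the translation ambiguity, then
# solve C @ V = fused_e for the fused vertex positions in least squares.
A = np.vstack([C, np.eye(n)[:1]])
b = np.vstack([fused_e, np.zeros((1, 2))])
V_fused, *_ = np.linalg.lstsq(A, b, rcond=None)
# V_fused ≈ [[0, 0], [1.5, 0], [0, 1.5]]
```

The pinning row is needed because edge vectors determine vertex positions only up to a global translation; any one vertex can be fixed to make the solution unique.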
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to determining an edge vector of each connecting edge; determining a fused edge vector of each connecting edge based on the fused weight model shared by the initial facial models and the edge vector of the corresponding connecting edge in each of the initial facial models; determining fused position information of each vertex of each connecting edge in the topological structure based on the fused edge vector of the corresponding connecting edge and the connecting matrix of each of the initial facial models; and generating a fused facial model for the 3D virtual character based on the fused position information of each vertex of each connecting edge in the topological structure, based on the method of Wampler's reference. The suggestion/motivation would have been to reduce artifacts from combining and improve the processing for real-time animation (See Wampler, ¶ [0009–0011 and 0013]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wampler with Booth and Joris to obtain the invention as specified in claim 1.

Regarding claim 8, claim 8 is rejected on the same basis as claim 1; the arguments presented above for claim 1 are equally applicable to claim 8, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference. Furthermore, Joris teaches a computer device (Fig. 7 – computer system 900), comprising: a memory, configured to store an executable instruction (See Joris, ¶ [0059]; Fig. 7 – Local memory 904 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 902); and a processor (Fig. 7 – processor 902), configured to implement, when executing the executable instruction stored in the memory (See Joris, ¶ [0059]; Fig. 7 – Local memory 904 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 902), an image data processing method including (See Joris, ¶ [0016], The first mesh is preferably located in the image of the UV map of the second mesh).

Regarding claim 15, claim 15 is rejected on the same basis as claim 1; the arguments presented above for claim 1 are equally applicable to claim 15, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference. Furthermore, Joris teaches a computer device, comprising: a memory, configured to store an executable instruction; and a processor, configured to implement, when executing the executable instruction stored in the memory, an image data processing method including (See Joris, ¶ [0059], FIG. 7 shows a suitable computing system 800 for hosting the system 1 of FIG. 1. Computing system 900 may in general be formed as a suitable general-purpose computer and comprise a bus 910, a processor 902, a local memory 904, one or more optional input interfaces 914, one or more optional output interfaces 916, a communication interface 912, a storage element interface 906 and one or more storage elements 908. Bus 910 may comprise one or more conductors that permit communication among the components of the computing system).

Claim(s) 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Booth et al. (See NPL attached, “A 3D Morphable Model learnt from 10,000 faces”, hereafter, “Booth”) in view of Joris et al.
(US 20220020214 A1, hereafter, “Joris”) further in view of Wampler (US 20180130256 A1, hereafter, “Wampler”) and further in view of Zi–Hang et al. (See NPL attached, “Disentangled Representation Learning for 3D Face Shape”, hereafter, “Zi–Hang”).

Regarding claim 2, Booth in view of Joris further in view of Wampler teaches the method according to claim 1, wherein the [determining an edge vector of a connecting edge] and a connecting matrix of each of the initial facial models based on the topological structure (See Booth, [Pg. 5545, Col. 2, ln. 7–10, 4. Background], The geometry of a 3D facial mesh is defined by the vector X = [x_1^T, x_2^T, …, x_n^T]^T ∈ ℝ^(3n), where n is the number of vertices and x_i = [x_xi, x_yi, x_zi]^T ∈ ℝ^3 describes the X, Y, and Z coordinates of the i-th vertex) comprises: acquiring position information and connecting information of vertexes in each of the initial facial models based on the topological structure (See Booth, [4. Background], The geometry of a 3D facial mesh is defined by the vector X = [x_1^T, x_2^T, …, x_n^T]^T ∈ ℝ^(3n), where n is the number of vertices and x_i = [x_xi, x_yi, x_zi]^T ∈ ℝ^3 describes the X, Y, and Z coordinates of the i-th vertex. Note: Examiner is interpreting the vector X as the connecting information and the x_i as the position information); [determining the edge vector of the connecting edge in each of the initial facial models based on the position information and connecting information of the vertexes in each of the initial facial models; and determining the connecting matrix of each of the initial facial models based on the connecting information of the vertexes].

However, Booth and Joris fail(s) to teach determining an edge vector of a connecting edge.
Wampler, working in the same field of endeavor, teaches: determining an edge vector of a connecting edge (See Wampler, ¶ [0147], For example, the stylized mesh deformation system holds rotation of each edge group in the input meshes constant and then minimizes the ARAP combined shape-space. Note: Examiner is interpreting the edge groups as the edge vector). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to determining an edge vector of a connecting edge, based on the method of Wampler's reference. The suggestion/motivation would have been to reduce artifacts from combining and improve the processing for real-time animation (See Wampler, ¶ [0009–0011 and 0013]).

However, Booth and Joris fail(s) to teach determining the edge vector of the connecting edge in each of the initial facial models based on the position information and connecting information of the vertexes in each of the initial facial models; and determining the connecting matrix of each of the initial facial models based on the connecting information of the vertexes.

Zi–Hang, working in the same field of endeavor, teaches: determining the edge vector of the connecting edge in each of the initial facial models based on the position information and connecting information of the vertexes in each of the initial facial models (See Zi–Hang, [Pg. 11959, Col. 1, ln. 16–20, 3.1. Overview], We define a facial mesh as a graph structure with a set of vertices V and edges, M = (V, A) with |V| = n. A ∈ {0,1}^(n×n) represents the adjacency matrix, where A_ij = 1 denotes an edge connection between vertex v_i and v_j, and A_ij = 0 otherwise. In our framework, the facial meshes in the training data set contain the same connectivity, and each vertex is associated with a feature vector in ℝ^d); and determining the connecting matrix of each of the initial facial models based on the connecting information of the vertexes (See Zi–Hang, [Pg. 11959, Col. 1, ln. 16–20, 3.1. Overview], We define a facial mesh as a graph structure with a set of vertices V and edges, M = (V, A) with |V| = n. A ∈ {0,1}^(n×n) represents the adjacency matrix, where A_ij = 1 denotes an edge connection between vertex v_i and v_j, and A_ij = 0 otherwise. In our framework, the facial meshes in the training data set contain the same connectivity, and each vertex is associated with a feature vector in ℝ^d).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to determining the edge vector of the connecting edge in each of the initial facial models based on the position information and connecting information of the vertexes in each of the initial facial models; and determining the connecting matrix of each of the initial facial models based on the connecting information of the vertexes, based on the method of Zi–Hang's reference. The suggestion/motivation would have been to accurately reconstruct 3D shapes (See Zi–Hang, [4.2. Evaluation Metric]. See also [Table 1]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Zi–Hang and Wampler with Booth and Joris to obtain the invention as specified in claim 2.
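For concreteness, Zi–Hang's graph view of a mesh, M = (V, A) with adjacency matrix A ∈ {0,1}^(n×n), can be sketched as follows. The 4-vertex example data is invented for illustration and is not from the paper.

```python
import numpy as np

# Invented 4-vertex example of the graph-structured mesh M = (V, A).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1  # A_ij = 1 denotes an edge between v_i and v_j

# The connecting information of the vertexes is recoverable from A alone.
recovered = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
print(recovered)  # [(0, 1), (0, 2), (1, 2), (2, 3)]
```

Because the adjacency matrix and an edge list carry the same connectivity information, either can serve as the "connecting information" in the examiner's mapping.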
Regarding claim 9, claim 9 is rejected on the same basis as claim 2; the arguments presented above for claim 2 are equally applicable to claim 9, and the limitations similar to those of claim 2 are not repeated herein but are incorporated by reference. Regarding claim 16, claim 16 is rejected on the same basis as claim 2; the arguments presented above for claim 2 are equally applicable to claim 16, and the limitations similar to those of claim 2 are not repeated herein but are incorporated by reference.

Claim(s) 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Booth et al. (See NPL attached, “A 3D Morphable Model learnt from 10,000 faces”, hereafter, “Booth”) in view of Joris et al. (US 20220020214 A1, hereafter, “Joris”) and further in view of Wampler (US 20180130256 A1, hereafter, “Wampler”) further in view of Park et al. (US 20100259538 A1, hereafter, “Park”).

Regarding claim 3, Booth in view of Joris in view of Wampler teaches the method according to claim 1, [wherein the fused weight model shared by the initial facial models includes a preset face dividing area graph comprising a plurality of areas, each area having a respective fused weight parameter corresponding to a respective physiological part of a human face]. However, Booth, Joris and Wampler fail(s) to teach wherein the fused weight model shared by the initial facial models includes a preset face dividing area graph comprising a plurality of areas, each area having a respective fused weight parameter corresponding to a respective physiological part of a human face.
Park, working in the same field of endeavor, teaches: wherein the fused weight model shared by the initial facial models includes a preset face dividing area graph comprising a plurality of areas, each area having a respective fused weight parameter corresponding to a respective physiological part of a human face (See Park, ¶ [0048], The facial animation generation unit 130 determines a blending weight for each facial region of each key model of a facial character, using a parameter that is determined based on an image frame having a facial region most similar to a facial region of a key model).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to wherein the fused weight model shared by the initial facial models includes a preset face dividing area graph comprising a plurality of areas, each area having a respective fused weight parameter corresponding to a respective physiological part of a human face, based on the method of Park's reference. The suggestion/motivation would have been to produce accurate models using less time and effort (See Park, ¶ [0002–0009]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Park with Booth, Joris and Wampler to obtain the invention as specified in claim 3.

Regarding claim 10, claim 10 is rejected on the same basis as claim 3; the arguments presented above for claim 3 are equally applicable to claim 10, and the limitations similar to those of claim 3 are not repeated herein but are incorporated by reference.
Regarding claim 17, claim 17 is rejected on the same basis as claim 3; the arguments presented above for claim 3 are equally applicable to claim 17, and the limitations similar to those of claim 3 are not repeated herein but are incorporated by reference.

Claim(s) 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Booth et al. (See NPL attached, “A 3D Morphable Model learnt from 10,000 faces”, hereafter, “Booth”) further in view of Joris et al. (US 20220020214 A1, hereafter, “Joris”) further in view of Wampler (US 20180130256 A1, hereafter, “Wampler”) and further in view of Hao et al. (See NPL attached, “Example-Based Facial Rigging”, hereafter, “Hao”).

Regarding claim 6, Booth in view of Joris further in view of Wampler teaches the method according to claim 1, further comprising: [adjusting the fused weight model shared by the initial facial models based on the fused facial model and each of the initial facial models to obtain an adjusted fused weight model; and performing fusion processing on each of the initial facial models based on the adjusted fused weight model to obtain an adjusted fused facial model]. However, Booth, Joris and Wampler fail(s) to teach adjusting the fused weight model shared by the initial facial models based on the fused facial model and each of the initial facial models to obtain an adjusted fused weight model; and performing fusion processing on each of the initial facial models based on the adjusted fused weight model to obtain an adjusted fused facial model.

Hao, working in the same field of endeavor, teaches: adjusting the fused weight model shared by the initial facial models based on the fused facial model and each of the initial facial models to obtain an adjusted fused weight model (See Hao, [Pg. 32:3, Col. 2, ln.
42–44, B: Optimizing Weights], Given the computed set B of blendshapes, we can solve for the optimal weights a_ij to reconstruct the training poses s_j using least-squares fitting); and performing fusion processing on each of the initial facial models based on the adjusted fused weight model to obtain an adjusted fused facial model (See Hao, [Pg. 32:3, Col. 2, ln. 42–44, B: Optimizing Weights], Given the computed set B of blendshapes, we can solve for the optimal weights a_ij to reconstruct the training poses s_j using least-squares fitting).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to adjusting the fused weight model shared by the initial facial models based on the fused facial model and each of the initial facial models to obtain an adjusted fused weight model, and performing fusion processing on each of the initial facial models based on the adjusted fused weight model to obtain an adjusted fused facial model, based on the method of Hao's reference. The suggestion/motivation would have been to accurately change the model to reduce errors (See Hao, [4 Evaluation]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Hao with Booth, Joris and Wampler to obtain the invention as specified in claim 6.

Regarding claim 13, claim 13 is rejected on the same basis as claim 6; the arguments presented above for claim 6 are equally applicable to claim 13, and the limitations similar to those of claim 6 are not repeated herein but are incorporated by reference.
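Hao's quoted weight-optimization step (solving for the optimal blend weights by least-squares fitting, given blendshapes and a training pose) can be sketched on toy data. The matrix shapes and variable names below are illustrative assumptions, not Hao's actual formulation.

```python
import numpy as np

# Toy least-squares fit of blend weights (illustrative shapes/names only).
# Columns of B are blendshape displacement vectors stacked over vertices;
# s is a training pose synthesized from known ground-truth weights.
rng = np.random.default_rng(0)
B = rng.normal(size=(9, 3))          # 3 blendshapes over 3 vertices (xyz)
a_true = np.array([0.2, 0.5, 0.3])   # ground-truth blend weights
s = B @ a_true                       # training pose

# Solve min_a || B a - s ||^2 for the optimal weights.
a_fit, *_ = np.linalg.lstsq(B, s, rcond=None)
print(np.allclose(a_fit, a_true))  # True
```

Since the synthetic pose lies exactly in the span of the blendshapes, the least-squares fit recovers the generating weights; real training poses would be fit only approximately.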
Regarding claim 20, claim 20 is rejected on the same basis as claim 6; the arguments presented above for claim 6 are equally applicable to claim 20, and the limitations similar to those of claim 6 are not repeated herein but are incorporated by reference.

Claim(s) 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Booth et al. (See NPL attached, “A 3D Morphable Model learnt from 10,000 faces”, hereafter, “Booth”) in view of Joris et al. (US 20220020214 A1, hereafter, “Joris”) further in view of Wampler (US 20180130256 A1, hereafter, “Wampler”) and further in view of Sagar et al. (US 20210390751 A1, hereafter, “Sagar”).

Regarding claim 7, Booth in view of Joris further in view of Wampler teaches the method according to claim 1, further comprising: [acquiring limb model information of a digital virtual human; and constructing the digital virtual human based on the fused facial model and the limb model information]. However, Booth, Joris and Wampler fail(s) to teach acquiring limb model information of a digital virtual human; and constructing the digital virtual human based on the fused facial model and the limb model information.

Sagar, working in the same field of endeavor, teaches: acquiring limb model information of a digital virtual human; and constructing the digital virtual human based on the fused facial model and the limb model information (See Sagar, ¶ [0090], Each region may be blended independently of the global blending, and combined together, and applied to the globally blended head model to form the final blended model. Blending and recompositing of the regions is achieved as follows. ¶ [0105], When blending body parts; customization of body types, muscle mass, and regional characteristics, for example, broad shoulders and big feet may be blended. Blending on body parts or body follows the outline above including regional blending again based on a muscle model).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Booth's reference to acquiring limb model information of a digital virtual human, and constructing the digital virtual human based on the fused facial model and the limb model information, based on the method of Sagar's reference. The suggestion/motivation would have been to accurately combine virtual characters (See Sagar, ¶ [0002–0006]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sagar with Booth, Joris and Wampler to obtain the invention as specified in claim 7.

Regarding claim 14, claim 14 is rejected on the same basis as claim 7; the arguments presented above for claim 7 are equally applicable to claim 14, and the limitations similar to those of claim 7 are not repeated herein but are incorporated by reference.

Allowable Subject Matter

Claim(s) 4, 11 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim(s) 4, 11 and 18 contain subject matter that is not disclosed or made obvious in the cited art.
In regard to claim 4, when considering claim 4 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "wherein the determining a fused edge vector of the connecting edge based on a fused weight model shared by the initial facial models and the edge vector of the connecting edge in each of the initial facial models comprises: determining an area where a connecting edge is located based on position information of the connecting edge in each of the initial facial models; acquiring, from the fused weight model, a fused weight parameter corresponding to the area in each of the initial facial models; and performing weighted summation on the edge vector of the connecting edge in each of the initial facial models and the corresponding fused weight parameter to obtain the fused edge vector of the connecting edge".

In regard to claim 11, when considering claim 11 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination, the limitations quoted above for claim 4, which claim 11 recites in substantially identical form.
In regard to claim 18, when considering claim 18 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination, the limitations quoted above for claim 4, which claim 18 recites in substantially identical form.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Barbie et al. (US 11393168 B2) teaches methods, systems, and techniques for generating a new, animation-ready anatomy. A skin mesh of the new anatomy is obtained, such as by performing a 3D depth scan of a subject. Selected template anatomies are also obtained, each having a skin mesh that corresponds with the new anatomy's skin mesh. The skin meshes of the new and selected template anatomies share a first pose. Each selected template anatomy also has a skeleton for the first pose, skinning weights, and the skin mesh in at least one additional pose different from the first pose and any other additional poses. The method then uses a processor to interpolate the new anatomy from at least one of the skeleton and skinning weights of the selected template anatomies and their first and at least one additional poses.
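The fused-edge-vector computation recited in claims 4, 11 and 18 can be sketched as a plain weighted summation. This is a minimal illustration of the quoted claim language only; the area labels and the layout of the fused weight model are hypothetical, not the applicant's implementation.

```python
import numpy as np

def fuse_edge_vector(edge_vectors, areas, fused_weight_model):
    """Weighted summation of one connecting edge across the initial facial models.

    edge_vectors:       per-model (3,) vectors of the same connecting edge
    areas:              per-model area label where the edge lies, determined
                        from the edge's position in that model
    fused_weight_model: shared model mapping area -> per-model fused weight
                        parameters (layout here is an assumption)
    """
    fused = np.zeros(3)
    for i, (vec, area) in enumerate(zip(edge_vectors, areas)):
        # Weight the edge vector by the fused weight parameter for its area.
        fused += fused_weight_model[area][i] * vec
    return fused

# Two initial facial models; the connecting edge falls in the "cheek"
# area of both, with fused weights 0.7 and 0.3.
fused_edge = fuse_edge_vector(
    [np.array([1., 0., 0.]), np.array([0., 1., 0.])],
    ["cheek", "cheek"],
    {"cheek": [0.7, 0.3]},
)
```

Per the claim, the only moving parts are (1) locating the edge's area, (2) looking up that area's per-model weight, and (3) summing the weighted edge vectors; the sketch mirrors those three steps.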
Hutchinson (US 10818061 B2) teaches systems, methods, and non-transitory computer-readable media that can identify a virtual deformable geometric model to be animated in a real-time immersive environment. The virtual deformable geometric model comprises a virtual model mesh comprising a plurality of vertices, a plurality of edges, and a plurality of faces. The virtual model mesh is iteratively refined in one or more iterations to generate a refined mesh, each iteration increasing the number of vertices, edges, and/or faces. The refined mesh is presented during real-time animation of the virtual deformable geometric model within the real-time immersive environment.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER, whose telephone number is (703) 756-5849. The examiner can normally be reached Monday through Thursday 5:30 am to 2:30 pm, and Friday 5:30 am to 9:30 am PST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DION J SATCHER/
Patent Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
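The reply-deadline arithmetic the action recites can be sketched as below, assuming the Feb 28, 2026 mailing date shown in the prosecution timeline and ignoring the advisory-action adjustment described in the action.

```python
import calendar
from datetime import date

def add_months(d: date, n: int) -> date:
    """Advance a date by n calendar months, clamping to the month's last day."""
    months = d.month - 1 + n
    year, month = d.year + months // 12, months % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailed = date(2026, 2, 28)                 # Final Rejection mailing date
shortened_period = add_months(mailed, 3)   # 3-month SSP set by the action
statutory_max = add_months(mailed, 6)      # 6-month statutory cap on reply
```

With extensions of time under 37 CFR 1.136(a), a reply filed after the shortened period but on or before the statutory maximum remains timely on payment of the extension fee.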

Prosecution Timeline

Jun 27, 2023
Application Filed
Sep 26, 2025
Non-Final Rejection — §103
Dec 31, 2025
Response Filed
Jan 06, 2026
Applicant Interview (Telephonic)
Jan 06, 2026
Examiner Interview Summary
Feb 28, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586218
MOTION ESTIMATION WITH ANATOMICAL INTEGRITY
2y 5m to grant Granted Mar 24, 2026
Patent 12579787
INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12573066
Depth Estimation Using a Single Near-Infrared Camera and Dot Illuminator
2y 5m to grant Granted Mar 10, 2026
Patent 12555263
SYSTEMS AND METHODS FOR TWO-STAGE OBJECTION DETECTION
2y 5m to grant Granted Feb 17, 2026
Patent 12548140
DETERMINING PROCESS DEVIATIONS THROUGH VIDEO ANALYSIS
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+14.2%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 39 resolved cases by this examiner. Grant probability derived from career allow rate.
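A rough sketch of how these headline figures appear to follow from the examiner's career data; the exact rounding scheme the page applies is an assumption.

```python
# Career data stated above: 33 granted out of 39 resolved cases.
granted, resolved = 33, 39
career_allow_rate = 100 * granted / resolved      # ~84.6%
base_probability = round(career_allow_rate)       # shown as 85%

# Observed interview lift for this examiner, stated as +14.2%.
interview_lift = 14.2
with_interview = min(base_probability + interview_lift, 100.0)  # shown as 99%
```

The additive lift capped at 100% reproduces the displayed 85% and 99% figures, but the page may weight recent cases or art units differently.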
