DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 25 March 2026 have been fully considered but they are not persuasive. Applicant’s arguments with respect to the prior art have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 9, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Takeda (U.S. Publication 2022/0237869) in view of Bailey (U.S. Publication 2021/0350621) and Fasogbon (WO 2021/173489).
As to claim 1, Takeda discloses an image processing method performed by a computer device (figs. 1-2), the method comprising:
constructing a three-dimensional facial mesh corresponding to a target object according to a target image of the target object (p. 1, section 0014-p. 2, section 0016; p. 3, section 0031; a 3D mesh is constructed according to a face, reading on the target object, in a particular received 2D image, reading on the target image);
transforming the three-dimensional facial mesh into a target UV map corresponding to the target object, the target UV map carrying position data of vertices on the three-dimensional facial mesh (p. 3, section 0032; p. 6, sections 0060-0062; the mesh is transformed to a UV map; UV mapping is performed to the vertex positions of the mesh using coordinates of the map);
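For illustration of the kind of mesh-to-UV transform mapped above (a minimal sketch only, not taken from Takeda; the array names, map resolution, and normalization convention are assumptions), a UV map can carry the vertex position data of a three-dimensional facial mesh by writing each vertex's (x, y, z) coordinates into the pixel addressed by that vertex's UV coordinate:

    import numpy as np

    def mesh_to_position_uv_map(vertices, uv_coords, size=256):
        """Scatter per-vertex 3D positions into a UV-space image.

        vertices:  (N, 3) array of x, y, z positions on the facial mesh.
        uv_coords: (N, 2) array of per-vertex UV coordinates in [0, 1].
        Returns a (size, size, 3) map whose channels hold x, y, z.
        """
        uv_map = np.zeros((size, size, 3), dtype=np.float32)
        # Convert normalized UV coordinates to integer pixel indices.
        cols = np.clip((uv_coords[:, 0] * (size - 1)).astype(int), 0, size - 1)
        rows = np.clip(((1.0 - uv_coords[:, 1]) * (size - 1)).astype(int), 0, size - 1)
        # Each vertex's position becomes the "color" value of its pixel.
        uv_map[rows, cols] = vertices
        return uv_map

Under this convention the three channels of the map store position data rather than texture color.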
Takeda does not disclose, but Bailey discloses inputting the target UV map into a face creation parameter prediction model (p. 4, section 0057-p. 5, section 0064; a UV map is obtained for a face and input to a CNN) and obtaining, as output from the face creation parameter prediction model, target face creation parameters for constructing a virtual facial image of a virtual character (p. 5, section 0059-p. 6, section 0070; vertex parameters relating to a deformed facial mesh are output for constructing a virtual face), wherein the virtual character is a virtual representation of the target object and is distinct from the target object (figs. 8, 9, 11; p. 14, sections 0154-0155; the animated character is a representation of an actor whose expressions are recorded; the facial proportions of the character and actor may differ, making these distinct objects; the distinctness can also be seen with respect to the human vs. the animated character in the figures; the animated character reads on a “virtual” character with a “virtual” representation of the actor since the character does not exist in the real world) and wherein the output carries three-dimensional structure information for constructing the virtual facial image of a virtual character (figs. 8, 9, 11; p. 5, section 0059-p. 6, section 0070; p. 14, sections 0154-0155; vertex parameters relating to a deformed facial mesh, which read on 3D structure information, are output for constructing a virtual face; the face is for a virtual character as noted above) and wherein the applying of the target face creation parameters is to a basic virtual facial image of the virtual character (p. 10, section 0123; p. 15, sections 0169-0171; the deformation parameters, including vertex offsets determined using the CNN, are applied to a neutral/basic facial mesh image for the character). The motivation for this is to decrease evaluation time for complex models (p. 1, sections 0007-0008). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Takeda to input the target UV map into a face creation parameter prediction model and obtain, as output from the face creation parameter prediction model, target face creation parameters for constructing a virtual facial image of a virtual character, wherein the virtual character is a virtual representation of the target object and is distinct from the target object, wherein the output carries three-dimensional structure information for constructing the virtual facial image of a virtual character, and apply the target face creation parameters to a basic virtual facial image of the virtual character, in order to decrease evaluation time for complex models as taught by Bailey.
Bailey discloses output face creation parameters, but not specifically an output UV map that is applied and distinct from the target UV map. Fasogbon, however, discloses an output UV map that includes target face creation parameters (p. 11-12, section 0039; p. 16, section 0049-p. 18, section 0054; the output UV map is applied, including target subject creation parameters such as geometry, color and depth; the subject can be a human and specifically a human face) and wherein the output UV map is distinct from the target UV map (p. 8, section 0033; p. 9-10, section 0036; p. 19, section 0059-p. 20, section 0060; the output/predicted UV map is distinct from input/target UV maps that represent ground truth) and carries three-dimensional structure information for constructing the virtual facial image (p. 11-12, section 0039; p. 16, section 0049-p. 18, section 0054; p. 20, section 0063-p. 21, section 0064; the output UV map carries 3D structure information such as geometry, color and depth; the subject can be a human and specifically a human face) and applying the output UV map including the target face creation parameters to a face (p. 11-12, section 0039; p. 16, section 0049-p. 18, section 0054; p. 20, section 0063-p. 21, section 0064; the output UV map is applied, including target subject creation parameters such as geometry, color and depth; the subject can be a human and specifically a human face). The motivation for this is to allow upscaling and transformation in a less complex, less resource-intensive way (p. 21, section 0065). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Takeda and Bailey to use an output UV map that includes target face creation parameters, wherein the output UV map is distinct from the target UV map and carries three-dimensional structure information for constructing the virtual facial image, and apply the output UV map including the target face creation parameters to a face in order to allow upscaling and transformation in a less complex, less resource-intensive way as taught by Fasogbon.
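As a minimal, hypothetical sketch of a face creation parameter prediction model of the kind mapped above (not Bailey's or Fasogbon's actual network; the layer sizes, parameter count, and input resolution are assumptions), a small convolutional network can take a target UV map as input and output a vector of face creation parameters:

    import torch
    import torch.nn as nn

    class FaceParamPredictor(nn.Module):
        """Toy CNN: UV position map in, face creation parameter vector out."""
        def __init__(self, num_params=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_params)

        def forward(self, uv_map):                # uv_map: (B, 3, H, W)
            x = self.features(uv_map).flatten(1)  # pooled features: (B, 32)
            return self.head(x)                   # parameters: (B, num_params)

    # Usage: predict parameters for one 256x256 target UV map.
    model = FaceParamPredictor()
    params = model(torch.randn(1, 3, 256, 256))

A variant whose output is itself a UV map, as in Fasogbon, would replace the linear head with upsampling layers so that the prediction is organized as an image rather than a flat vector; the sketch above shows only the parameter-vector case.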
As to claim 5, see the rejection to claim 1. Further, Takeda discloses a computer device, comprising a processor and a memory, the memory being configured to store a computer program; and the processor being configured to implement an image processing method by executing the computer program (p. 4, sections 0038-0040).
As to claim 9, see the rejections to claims 1 and 5.
As to claim 13, Fasogbon discloses wherein training samples used for training the face creation parameter prediction model are constructed based on faces of real objects (p. 16, section 0049-p. 18, section 0054; samples used for training can be input face images; p. 19, section 0057-p. 20, section 0059; training images for creating the UV parameter prediction model are both real images of face objects as well as images based on real faces but with different features or parameters; the prediction can give parameters to construct a face as part of parameters for the overall subject). Motivation for the combination of references is given in the rejection to claim 1.
As to claim 14, Fasogbon discloses wherein the face creation parameter prediction model is trained on a target loss function according to a difference between a first training UV map and a first predicted UV map (p. 16, section 0049-p. 18, section 0054; training is performed using loss between a training/ground truth UV map geometry and color vs. a predicted UV map geometry and color for each location), the first predicted UV map corresponding to a first training three-dimensional facial mesh and being determined through a three-dimensional facial mesh prediction model according to predicted face creation parameters corresponding to the first training UV map (p. 8, section 0033; p. 10, section 0037-p. 11, section 0038; p. 12, section 0040; p. 15, section 0047; p. 17, section 0053-p. 18, section 0054; the predicted UV map, with inputs including features from the training data such as color/depth images corresponding to a 3D mesh, is determined through a model to predict 3D mesh geometry and color parameters for creating a subject image, and correspondence to the training map is evaluated for loss; as noted in the rejection to claim 1, the subject can be a human and specifically a human face). Motivation for the combination of references is given in the rejection to claim 1.
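A minimal sketch of the kind of target loss function described, assuming both maps are stored as arrays of the same shape (the function name and the choice of a squared per-pixel error are assumptions, not Fasogbon's specific formulation):

    import numpy as np

    def uv_map_loss(predicted_uv, training_uv):
        """Mean squared per-pixel difference between a predicted UV map
        and a ground-truth training UV map (both (H, W, C) arrays)."""
        return float(np.mean((predicted_uv - training_uv) ** 2))

During training, this value would be minimized so that the predicted UV map converges toward the first training UV map.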
Claims 2, 6, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Takeda in view of Bailey and Fasogbon and further in view of Kim (U.S. Publication 2021/00374402).
As to claim 2, Takeda does not disclose, but Kim discloses wherein the transforming the three-dimensional facial mesh into a target UV map comprises: determining color channel values of pixel points in the basic UV map on the basis of a correspondence relationship between the vertices on the three-dimensional facial mesh and pixel points in a basic UV map and the position data of the vertices on the three-dimensional facial mesh (p. 5, sections 0067-0078; a UV mapping is performed using correspondence between 3D facial mesh vertices and specific pixels in a 2D UV map, which can read on the basic UV map; position data of vertices representing particular features is used to determine the correspondence and colors are stored to pixel positions in the uv_tex map); and determining the target UV map on the basis of the color channel values of the pixel points in the basic UV map (p. 6, section 0099; the basic map with color values, uv_tex, is used to infer another uv_tex map from a number of multi-view images, which can read on the target UV map). The motivation for this is to analyze imagery while reducing computational load compared to working purely in 3D space (p. 1, section 0007; p. 4, section 0055). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Takeda, Bailey, and Fasogbon to determine color channel values of pixel points in the basic UV map on the basis of a correspondence relationship between the vertices on the three-dimensional facial mesh and pixel points in a basic UV map and the position data of the vertices on the three-dimensional facial mesh and determine the target UV map on the basis of the color channel values of the pixel points in the basic UV map in order to analyze imagery while reducing computational load compared to working purely in 3D space as taught by Kim.
As to claim 6, see the rejection to claim 2.
As to claim 10, see the rejection to claim 2.
Claims 3, 7, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Takeda in view of Bailey, Fasogbon, and Kim and further in view of Saragih (U.S. Publication 2020/0402284).
As to claim 3, Kim discloses wherein the determining color channel values of pixel points in the basic UV map on the basis of a correspondence relationship between the vertices on the three-dimensional facial mesh and pixel points in a basic UV map and the position data of the vertices on the three-dimensional facial mesh comprises: for each patch on the three-dimensional facial mesh, determining, on the basis of the correspondence relationship, pixel points separately corresponding to the vertices in the patch from the basic UV map, and determining a color channel value of the corresponding pixel point according to the position data of each vertex (p. 5, sections 0067-0078; a UV mapping is performed using correspondence between 3D facial mesh vertices and specific pixels in a 2D UV map, which can read on the basic UV map; position data of vertices representing particular features is used to determine the correspondence and colors are stored to pixel positions in the uv_tex map; the triangles in the mesh associated with the vertices correspond to the claimed patches);
The combination of Takeda, Bailey, Fasogbon, and Kim does not disclose, but Saragih discloses determining a coverage region of the patch in the basic UV map according to the pixel points separately corresponding to the vertices in the patch, and rasterizing the coverage region (p. 6, section 0054; p. 7, section 0066-0069; correspondence is found from facial 3D mesh vertices to UV coordinates in the texture map; a coverage region is found and pixels within it are rasterized); and interpolating, on the basis of a quantity of pixel points comprised in the rasterized coverage region, the color channel values of the pixel points separately corresponding to the vertices in the patch, and taking the interpolated color channel values as color channel values of the pixel points in the rasterized coverage region (p. 6, section 0054; p. 7, section 0066-0069; corresponding colors of vertices are used to determine UV/texture pixels; color values are interpolated/blended using the UV/texture pixels to fill the region).
The motivation for this is to allow correspondence between captured images and a user’s facial expressions (p. 1, sections 0004-0005). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Takeda, Bailey, Fasogbon, and Kim to determine a coverage region of the patch in the basic UV map according to the pixel points separately corresponding to the vertices in the patch, and rasterize the coverage region, and interpolate, on the basis of a quantity of pixel points comprised in the rasterized coverage region, the color channel values of the pixel points separately corresponding to the vertices in the patch, and take the interpolated color channel values as color channel values of the pixel points in the rasterized coverage region in order to allow correspondence between captured images and a user’s facial expressions as taught by Saragih.
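For illustration of the rasterization and interpolation mapped above (a sketch under assumed data layouts, not Saragih's implementation), one triangular patch's coverage region in the basic UV map can be filled by barycentric interpolation of the color channel values at its three vertices:

    import numpy as np

    def rasterize_patch(uv_map, pix, vertex_colors):
        """Fill one triangular patch's coverage region in a UV map.

        pix:           (3, 2) integer pixel coordinates of the patch's vertices.
        vertex_colors: (3, C) color channel values at those vertices.
        Pixels inside the triangle receive barycentrically interpolated colors.
        """
        (x0, y0), (x1, y1), (x2, y2) = pix
        xmin, xmax = min(x0, x1, x2), max(x0, x1, x2)
        ymin, ymax = min(y0, y1, y2), max(y0, y1, y2)
        area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
        if area == 0:
            return uv_map  # degenerate patch covers no region
        for y in range(ymin, ymax + 1):
            for x in range(xmin, xmax + 1):
                # Barycentric weights of pixel (x, y) with respect to the patch.
                w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
                w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
                w2 = 1.0 - w0 - w1
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel lies in the coverage region
                    uv_map[y, x] = (w0 * vertex_colors[0]
                                    + w1 * vertex_colors[1]
                                    + w2 * vertex_colors[2])
        return uv_map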
As to claim 7, see the rejection to claim 3.
As to claim 11, see the rejection to claim 3.
Claims 4, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Takeda in view of Bailey, Fasogbon, and Kim and further in view of Stylianos (GB 2581524 A).
As to claim 4, Stylianos discloses wherein the determining the target UV map on the basis of the color channel values of the pixel points in the basic UV map comprises: determining a reference UV map on the basis of the color channel values of the respective pixel points in a target mapping region in the basic UV map, the target mapping region comprising coverage regions, in the basic UV map, of various patches on the three-dimensional facial mesh (p. 22, paragraphs 3-4; a basic UV map is created using the vertices of the 3D mesh; the areas/regions, which can read on target mapping regions, between the vertices can be interpolated to create a reference map); and mending the reference UV map when the target mapping region does not completely cover the basic UV map to obtain the target UV map (p. 22, paragraphs 3-4; missing data is mended in the map when not every area in the areas/regions is covered to obtain filled target areas/regions). The motivation for this is to process 3D mesh representations by systems or methods that accept color images as input, such as 2D convolutional networks. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Takeda, Bailey, Fasogbon, and Kim to determine a reference UV map on the basis of the color channel values of the respective pixel points in a target mapping region in the basic UV map, the target mapping region comprising coverage regions, in the basic UV map, of various patches on the three-dimensional facial mesh, and mend the reference UV map when the target mapping region does not completely cover the basic UV map to obtain the target UV map, in order to process 3D mesh representations by systems or methods that accept color images as input, such as 2D convolutional networks, as taught by Stylianos.
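As a sketch of the mending step described (a nearest-neighbor fill is assumed here purely for illustration and is not Stylianos's particular method), pixels of the reference UV map left uncovered by the target mapping region can be filled from their nearest covered pixels to obtain the target UV map:

    import numpy as np

    def mend_uv_map(reference_uv, covered_mask):
        """Fill pixels of a reference UV map that no patch covered.

        reference_uv: (H, W, C) map already filled over the target mapping region.
        covered_mask: (H, W) boolean array, True where some patch wrote a value.
        Each uncovered pixel copies the value of its nearest covered pixel.
        """
        target_uv = reference_uv.copy()
        covered_idx = np.argwhere(covered_mask)   # (M, 2) row/col of covered pixels
        missing_idx = np.argwhere(~covered_mask)  # (K, 2) row/col of holes to mend
        if covered_idx.size == 0 or missing_idx.size == 0:
            return target_uv
        for r, c in missing_idx:
            d2 = np.sum((covered_idx - np.array([r, c])) ** 2, axis=1)
            nr, nc = covered_idx[np.argmin(d2)]   # nearest covered pixel
            target_uv[r, c] = reference_uv[nr, nc]
        return target_uv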
As to claim 8, see the rejection to claim 4.
As to claim 12, see the rejection to claim 4.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON M RICHER whose telephone number is (571)272-7790. The examiner can normally be reached 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON M RICHER/Primary Examiner, Art Unit 2617