DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in China on 10/16/2023. It is noted, however, that applicant has not filed a certified copy of the 202311338975 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8-10, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Amayeh (U.S. Patent Application Publication No. 2018/0088340) in view of Geisner (U.S. Patent Application Publication No. 2011/0246329).
Regarding claim 14, Amayeh discloses an electronic device, comprising:
at least one processor (processor); and
a memory (non-volatile memory) connected with the at least one processor communicatively; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a digital human generation method (paragraph [0035]: The local processing and data module 260 may comprise a hardware processor, as well as digital memory, such as non-volatile memory (e.g., flash memory), both of which may be utilized to assist in the processing, caching, and storage of data), comprising:
acquiring a corresponding target object model based on a picture of a to-be-generated digital human (paragraph [0024]: build a three-dimensional (3D) model of the user's face based on the images acquired by the imaging system; paragraph [0179]: accessing images of the face previously acquired by the wearable device or by another computing device; and generating the face model based at least partly on the analysis of images captured by the imaging system and the accessed images);
acquiring a corresponding point cloud of a head key feature in the picture from a pre-configured feature database based on the head key feature (paragraph [0139]: At block 1314, the face images can be fused together to produce a face model... the face may be treated as a point cloud; paragraph [0140]: Faces may also be modeled as collections of keypoints (such as, e.g., a set of sparse, distinct, and visually salient features), or may be modeled by the identification and localization of particular features unique to the face (e.g. eye corners, mouth corners, eyebrows, etc.); paragraph [0078]: The object recognizers 708a-708n may crawl through these collected points and recognize one or more objects using a map database at block 830); and
fusing the point cloud of the head key feature in the target object model to obtain a digital human figure (paragraph [0139]: these features may be “fused” together with mathematical combinations to minimize uncertainty in the features' locations; paragraph [0078]: the desired virtual scene (e.g., user in CA) may be displayed at the appropriate orientation, position, etc., in relation to the various objects and other surroundings of the user in New York).
Amayeh discloses all the features with respect to claim 14 as outlined above. However, Amayeh fails to disclose acquiring the corresponding point cloud of the head key feature in the picture from a pre-configured feature library based on the head key feature.
Geisner discloses acquiring the corresponding point cloud of the head key feature in the picture from a pre-configured feature library based on the head key feature (paragraph [0135]: the system may detect features of the user based on the generation of the models from the image data, point cloud data, depth data, or the like... based on the location of five key data points (i.e., eyes, corner points of the mouth, and nose), the system suggests a facial recommendation for a player. The facial recommendation may include at least one selected facial feature, an entire set of facial features, or it may be a narrowed subset of options for facial features from the features library 197; paragraph [0144]: The selections made by the system and/or the user may be applied to the target's visual representation at 816. The system may render the visual representation to the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh to use the features library as taught by Geisner, in order to build virtual models from a library and assist a user with particular needs.
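For orientation only, the claimed flow mapped above for claim 14 can be pictured as the following minimal Python sketch: acquire a target object model from a picture, look up point clouds of head key features in a pre-configured feature database, and fuse them into the model. Every function, key, and data structure here is hypothetical and illustrative; none of it is drawn from the application or the cited references.

```python
# Hypothetical sketch of the claimed digital human generation flow.
import numpy as np

# Hypothetical pre-configured feature database: feature name -> point cloud (N x 3).
FEATURE_DATABASE = {
    "eye_corners": np.random.rand(50, 3),
    "mouth_corners": np.random.rand(40, 3),
}

def acquire_target_model(picture: np.ndarray) -> np.ndarray:
    """Stand-in for building a 3D target object model from the picture."""
    return np.random.rand(500, 3)  # placeholder vertices of a head model

def acquire_feature_cloud(feature_name: str) -> np.ndarray:
    """Look up the point cloud of a head key feature in the feature database."""
    return FEATURE_DATABASE[feature_name]

def fuse(model: np.ndarray, feature_cloud: np.ndarray) -> np.ndarray:
    """Fuse a feature point cloud into the model (here: a simple union)."""
    return np.vstack([model, feature_cloud])

picture = np.zeros((256, 256, 3))  # placeholder input picture
figure = acquire_target_model(picture)
for name in ("eye_corners", "mouth_corners"):
    figure = fuse(figure, acquire_feature_cloud(name))
```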
Claim 1 recites the functions of the apparatus recited in claim 14 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 14 applies to the method steps of claim 1.
Regarding claim 8, Amayeh as modified by Geisner discloses the method according to claim 1, wherein acquiring the corresponding target object model based on the picture of the to-be-generated digital human comprises:
extracting attribute features of the digital human based on the picture of the to-be-generated digital human (Geisner’s paragraph [0135]: the system may detect features of the user based on the generation of the models from the image data, point cloud data, depth data, or the like... based on the location of five key data points (i.e., eyes, corner points of the mouth, and nose), the system suggests a facial recommendation for a player; Amayeh’s paragraph [0129]: analyze the images by extracting identifiable features of the face using a keypoints detector and descriptor algorithm. Accordingly, the face may be represented by keypoints of identifiable features); and
acquiring the corresponding target object model from a preset model library based on the attribute features of the digital human; the model library comprising a plurality of object models (Geisner’s paragraph [0135]: The facial recommendation may include at least one selected facial feature, an entire set of facial features, or it may be a narrowed subset of options for facial features from the features library 197).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh to use the features library as taught by Geisner, in order to build virtual models from a library and assist a user with particular needs.
Regarding claim 9, Amayeh as modified by Geisner discloses the method according to claim 8, wherein acquiring the corresponding target object model based on the picture of the to-be-generated digital human comprises:
if the attribute features of the digital human are not extracted based on the picture of the to-be-generated digital human, using a pre-configured standard model as the target object model (Geisner’s paragraph [0135]: The facial recommendation may include at least one selected facial feature, an entire set of facial features, or it may be a narrowed subset of options for facial features from the features library 197; paragraph [0144]: The selections made by the system and/or the user may be applied to the target's visual representation at 816. The system may render the visual representation to the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh to use the features library as taught by Geisner, in order to build virtual models from a library and assist a user with particular needs.
Regarding claim 10, Amayeh as modified by Geisner discloses the method according to claim 1, wherein acquiring the corresponding point cloud of the head key feature in the picture from the pre-configured feature library based on the head key feature comprises:
acquiring target attribute information of the head key feature in the picture (Geisner’s paragraph [0135]: the system may detect features of the user based on the generation of the models from the image data, point cloud data, depth data, or the like... based on the location of five key data points (i.e., eyes, corner points of the mouth, and nose), the system suggests a facial recommendation for a player; Amayeh’s paragraph [0129]: analyze the images by extracting identifiable features of the face using a keypoints detector and descriptor algorithm. Accordingly, the face may be represented by keypoints of identifiable features; paragraph [0140]: Faces may also be modeled as collections of keypoints (such as, e.g., a set of sparse, distinct, and visually salient features), or may be modeled by the identification and localization of particular features unique to the face (e.g. eye corners, mouth corners, eyebrows, etc.)); and
acquiring the corresponding point cloud of the head key feature from the feature library based on the target attribute information of the head key feature (Geisner’s paragraph [0135]: the system may detect features of the user based on the generation of the models from the image data, point cloud data, depth data, or the like... The facial recommendation may include at least one selected facial feature, an entire set of facial features, or it may be a narrowed subset of options for facial features from the features library 197; Amayeh’s paragraph [0078]: The object recognizers 708a-708n may crawl through these collected points and recognize one or more objects using a map database at block 830).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh to use the features library as taught by Geisner, in order to build virtual models from a library and assist a user with particular needs.
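For orientation only, the attribute-keyed lookup mapped above for claim 10 might look like the following sketch. The attribute schema and library contents are hypothetical; a missing entry corresponds to the situation that claim 11 addresses with a user-triggered face pinching operation.

```python
# Hypothetical feature library keyed by target attribute information.
from typing import Optional
import numpy as np

FEATURE_LIBRARY = {
    ("eyes", "round"): np.random.rand(60, 3),
    ("mouth", "wide"): np.random.rand(45, 3),
}

def acquire_target_attributes(picture_region: np.ndarray) -> tuple:
    """Stand-in for extracting target attribute information of a head key feature."""
    return ("eyes", "round")

def acquire_feature_cloud(attributes: tuple) -> Optional[np.ndarray]:
    """Return the matching point cloud, or None when the library lacks the feature."""
    return FEATURE_LIBRARY.get(attributes)
```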
Claim 20 recites the functions of the apparatus recited in claim 14 as limitations of a storage medium. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 14 applies to claim 20.
Claims 2, 11-12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Amayeh (U.S. Patent Application Publication No. 2018/0088340) in view of Geisner (U.S. Patent Application Publication No. 2011/0246329), and further in view of He (U.S. Patent Application Publication No. 2014/0192212).
Regarding claim 15, Amayeh as modified by Geisner discloses detecting similarity between the head key feature in the target object model after the fusion and the head key feature in the picture, and the point cloud of the head key feature fused to the target object model (Geisner's paragraph [0141]: at 810, the system may select a subset of the feature options based on the detected feature. The system may select the subset as those features by comparing the similarities of the features in the features library 197 to the detected characteristics of the user. Sometimes, a feature will be very similar, but the system may still provide the user a subset of options to choose from at 810; Amayeh's paragraph [0139]: At block 1314, the face images can be fused together to produce a face model... the face may be treated as a point cloud). However, Amayeh as modified by Geisner fails to disclose detecting whether the similarity is larger than or equal to a preset similarity threshold; and, in response to a determination that the similarity is smaller than the preset similarity threshold, carrying out a face pinching operation based on triggering of a user.
He discloses detecting whether the similarity is larger than or equal to a preset similarity threshold (paragraph [0114]: If the similarity is greater than the threshold, it indicates that the color and other characteristics at the edge of the second image are identical or similar to those in the position that accommodates the second image in the first image); and
in response to the determination that the similarity is smaller than the preset similarity threshold, carrying out a face pinching operation based on triggering of a user (paragraph [0114]: if the similarity is smaller than the threshold… step 1004 may be executed; paragraph [0117]: Step 1004: Adjust the first image and/or the second image to increase the similarity between the edge area of the second image and the position that accommodates the second image in the first image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh as modified by Geisner to compare similarity against a threshold as taught by He, in order to provide realistic synthesized images.
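For orientation only, the threshold comparison mapped above to He can be pictured as the following sketch; the similarity metric, threshold value, and all names are hypothetical.

```python
# Hypothetical threshold-gated adjustment: compare the fused feature against
# the feature in the picture and adjust only below the preset threshold.
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # hypothetical preset similarity threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two feature vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def needs_face_pinching(fused: np.ndarray, from_picture: np.ndarray,
                        user_triggered: bool) -> bool:
    """True when the similarity falls below the threshold and the user
    triggers a face pinching (adjustment) operation."""
    if cosine_similarity(fused, from_picture) >= SIMILARITY_THRESHOLD:
        return False  # fusion result is close enough to the picture
    return user_triggered
```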
Claim 2 recites the functions of the apparatus recited in claim 15 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 15 applies to the method steps of claim 2.
Regarding claim 11, Amayeh as modified by Geisner and He discloses the method according to claim 10, further comprising:
if the head key feature corresponding to the target attribute information is not comprised in the feature library, carrying out the face pinching operation on the point cloud of the head key feature in the target object model based on triggering of the user (He’s paragraph [0114]: if the similarity is smaller than the threshold… step 1004 may be executed; paragraph [0117]: Step 1004: Adjust the first image and/or the second image to increase the similarity between the edge area of the second image and the position that accommodates the second image in the first image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh as modified by Geisner to compare similarity against a threshold as taught by He, in order to provide realistic synthesized images.
Regarding claim 12, Amayeh as modified by Geisner and He discloses the method according to claim 1, further comprising:
before acquiring the corresponding point cloud of the head key feature in the picture from the pre-configured feature library based on the head key feature, collecting point clouds of a plurality of head key features of each of a plurality of characters and attribute information of the head key features, and storing the point clouds and the attribute information in the feature library (He's paragraph [0114]: if the similarity is smaller than the threshold, the first image and the second image may be saved directly, or step 1004 may be executed; Geisner's paragraph [0135]: the system may detect features of the user based on the generation of the models from the image data, point cloud data, depth data, or the like... based on the location of five key data points (i.e., eyes, corner points of the mouth, and nose), the system suggests a facial recommendation for a player).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh as modified by Geisner to compare similarity against a threshold as taught by He, in order to provide realistic synthesized images.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Amayeh (U.S. Patent Application Publication No. 2018/0088340) in view of Geisner (U.S. Patent Application Publication No. 2011/0246329), and further in view of Xu (U.S. Patent Application Publication No. 2021/0390792).
Regarding claim 13, Amayeh as modified by Geisner discloses all the features with respect to claim 1 as outlined above. However, Amayeh as modified by Geisner fails to disclose registering the point cloud of the head key feature with the target object model; and migrating the point cloud of the head key feature to a corresponding head key feature region in the target object model.
Xu discloses registering the point cloud of the head key feature with the target object model; and migrating the point cloud of the head key feature to a corresponding head key feature region in the target object model (paragraph [0004]: reconstruct face details in the non-rigidly registered 3D face model by a Shape from Shading technology in the last image of the RGB-D image sequence, and generate a 3D neutral face model based on the deformed 3D face template model and the reconstructed 3D face template model; process the 3D neutral face model and a face hybrid template by a Deformation Transfer technology to generate a face hybrid model).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Amayeh as modified by Geisner to transfer the model as taught by Xu, in order to quickly generate a three-dimensional face animation.
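For orientation only, one common way to register a feature point cloud to a model region and migrate it there is a rigid least-squares alignment (the Kabsch/orthogonal Procrustes solution), sketched below with hypothetical names. Xu's cited method relies on non-rigid registration and Deformation Transfer, which this simplified sketch does not reproduce.

```python
# Hypothetical rigid registration and migration of a feature point cloud.
import numpy as np

def register(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation/translation aligning source to target.
    Assumes the two clouds contain the same number of corresponding points."""
    src_c, tgt_c = source.mean(0), target.mean(0)
    H = (source - src_c).T @ (target - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

def migrate(feature_cloud: np.ndarray, model_region: np.ndarray) -> np.ndarray:
    """Move the registered feature cloud onto the model's feature region."""
    R, t = register(feature_cloud, model_region)
    return feature_cloud @ R.T + t
```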
Allowable Subject Matter
Claims 3-7 and 16-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 3 and 16 recite carrying out the face pinching operation on the point cloud of the head key feature fused to the target object model, based on the triggering of the user, with a pre-established implicit constraint surface and a preset dynamic curve as constraints.
Amayeh (2018/0088340), Geisner (2011/0246329), and He (2014/0192212), alone or in combination, fail to teach or suggest these features. These limitations, when read in light of the remaining limitations of the claims and of the claims from which they depend, render claims 3 and 16 allowable subject matter.
Claims 4-6 depend on claim 3 and are indicated as allowable for the same reasons as claim 3.
Claims 17-19 depend on claim 16 and are indicated as allowable for the same reasons as claim 16.
Claim 7 recites detecting whether an accessory template of the target object model fits the digital human figure; and, in response to the determination that the accessory template of the target object model does not fit the digital human figure, adjusting the accessory template in the digital human figure.
Amayeh (2018/0088340), Geisner (2011/0246329), and He (2014/0192212), alone or in combination, fail to teach or suggest these features. These limitations, when read in light of the remaining limitations of the claim and of the claims from which it depends, render claim 7 allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616