DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshiro (JP 2002157605 A, cited in the IDS) in view of Scapel et al. (US 2020/0358725 A1).
Regarding claim 1, Yoshiro teaches:
A method of generating a sticker, comprising:
obtaining material images of a plurality of components on a character; ([0023], “The lip image storage unit 102 stores, for example, the lip images 111, 112, and 113 shown in FIG. 2A, FIG. 2B, and FIG. 2C. These lip images 111 to 113 are cut out from an image obtained by imaging. Here, the lip image is cut along the contour of the lip, but a rectangular region or an elliptical region including the lip image may be cut out.” FIG. 2. [0029], “coordinates such as both eyes, a nose, a mouth, and a jaw of a face image are designated,”)
determining global positions of the components based on the material images; ([0026], “First, for example, the face image 121 illustrated in FIG. 5 is input from the face image input unit 101 (step S201), and the lip image 113 in FIG. 2C in the lip image storage unit 102 is selected by an operation of an operation key (not illustrated). The image synthesis unit 103 processes the face image 121 by a well-known method to determine the position of the mouth of the face image 121”)
determining a target pose of the components under a target expression; ([0030], “when each of the lip images 111 to 113 in the lip image storage unit 102 is selected in an appropriate order and combined with the face image 121, and a series of synthetic face images 122 are formed, a moving image in which the mouth is opened and closed can be obtained.”) and
generating the sticker based on the material images, the global positions and the target pose; ([0026], “First, for example, the face image 121 illustrated in FIG. 5 is input from the face image input unit 101 (step S201), and the lip image 113 in FIG. 2C in the lip image storage unit 102 is selected by an operation of an operation key (not illustrated). The image synthesis unit 103 processes the face image 121 by a well-known method to determine the position of the mouth of the face image 121, and superimposes the lip image 113 of FIG. 2C at this position to form the composite face image 122 shown in FIG. 6 (step S202)” [0009], “If the moving image of the lip is combined with the face image in this manner, a three-dimensional face model for opening and closing the mouth can be obtained as a result.”)
wherein the sticker comprises a changing expression of the character from an initial expression to the target expression. ([0009], “If the moving image of the lip is combined with the face image in this manner, a three-dimensional face model for opening and closing the mouth can be obtained as a result.” [0030], “In addition, when each of the lip images 111 to 113 in the lip image storage unit 102 is selected in an appropriate order and combined with the face image 121, and a series of synthetic face images 122 are formed, a moving image in which the mouth is opened and closed can be obtained. Furthermore, instead of representing the movement of the lip by the lip images 111 to 113, a moving image of the lip may be formed by at least one lip image and data indicating the movement, and the moving image of the lip may be combined with the face image.”)
Yoshiro teaches a method of generating a character image. However, Yoshiro does not explicitly teach applying this image generation method to generate an avatar sticker. Scapel teaches that an image generation method can be used to generate an avatar sticker.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have applied the method of Yoshiro to the specific avatar sticker generation application of Scapel to generate an avatar sticker.
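For illustration only, the following is a minimal sketch, not taken from either reference or from the application, of the kind of component-compositing flow the rejection reads onto the combination: material images of components are placed at their global positions and then offset per frame to move from an initial expression toward a target expression. All names and values are hypothetical.

```python
# Illustrative sketch only; not the applicant's or the references' implementation.
from dataclasses import dataclass

@dataclass
class Component:
    name: str      # e.g. "mouth"
    image_id: str  # handle to the material image for this component
    x: float       # global position of the component on the character
    y: float

def compose_frame(components, pose_offsets):
    """Place each component at its global position plus a per-frame pose offset."""
    frame = []
    for c in components:
        dx, dy = pose_offsets.get(c.name, (0.0, 0.0))
        frame.append((c.image_id, c.x + dx, c.y + dy))
    return frame

def generate_sticker(components, pose_sequence):
    """One composited frame per pose, from the initial expression to the target expression."""
    return [compose_frame(components, pose) for pose in pose_sequence]

# Example: a mouth component that opens progressively across three frames.
mouth = Component("mouth", "lip_113", x=120.0, y=200.0)
print(generate_sticker([mouth], [{"mouth": (0, 0)}, {"mouth": (0, 4)}, {"mouth": (0, 8)}]))
```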
Regarding claim 10, Yoshiro in view of Scapel teaches:
An electronic device, comprising: at least one processor and memory; the memory storing a computer executive instruction; the at least one processor executing a computer executive instruction stored by the memory, such that the at least one processor ([0001]). The remainder of claim 10 recites limitations similar to those of claim 1 and is rejected accordingly.
Claim 11 recites limitations similar to those of claim 10 and is rejected accordingly.
Claims 2-7, 14-16, and 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshiro in view of Scapel, and further in view of Cohen et al. (US 2018/0308276 A1).
Regarding claim 2, Yoshiro in view of Scapel teaches:
The method of generating a sticker of claim 1, wherein generating the sticker based on the material image, the global position, and the target pose comprises: determining motion poses of the components at a plurality of moments based on the target pose …; and generating the sticker based on the material images, the global positions and the motion poses of the components at the plurality of moments, wherein an expression of the avatar in the sticker at an initial moment in the plurality of moments is the initial expression. ([0009], “If the moving image of the lip is combined with the face image in this manner, a three-dimensional face model for opening and closing the mouth can be obtained as a result.” [0030], “In addition, when each of the lip images 111 to 113 in the lip image storage unit 102 is selected in an appropriate order and combined with the face image 121, and a series of synthetic face images 122 are formed, a moving image in which the mouth is opened and closed can be obtained. Furthermore, instead of representing the movement of the lip by the lip images 111 to 113, a moving image of the lip may be formed by at least one lip image and data indicating the movement, and the moving image of the lip may be combined with the face image.”)
However, Yoshiro in view of Scapel does not teach, but Cohen teaches:
determining motion poses of the components at a plurality of moments based on the target pose and a periodic function; (Cohen: cited passage reproduced as image media_image1.png)
Yoshiro in view of Scapel teaches an expression transition from a starting point to an end point, but does not explicitly teach the detailed steps of that transition. Cohen teaches using a transition function with weight and time information to guide the expression transition.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Yoshiro in view of Scapel with the specific method of Cohen to generate an accurate expression animation.
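For illustration only, a minimal sketch of a time-weighted transition of the kind discussed above, in which a weight that varies with time blends each pose parameter from the initial expression to the target expression; the linear weight and the parameter names are assumptions, not drawn from Cohen.

```python
# Illustrative sketch of a time-weighted expression transition (hypothetical values).
def transition_weight(t, duration):
    """Weight rises from 0 to 1 over the transition duration (one possible choice)."""
    return max(0.0, min(1.0, t / duration))

def motion_pose(initial_pose, target_pose, w):
    """Blend each pose parameter between the initial and target expression by weight w."""
    return {k: (1.0 - w) * initial_pose[k] + w * target_pose[k] for k in initial_pose}

initial = {"mouth_open": 0.0, "brow_raise": 0.0}
target = {"mouth_open": 1.0, "brow_raise": 0.3}
# Motion poses at a plurality of moments (here, five evenly spaced times over one second).
poses = [motion_pose(initial, target, transition_weight(t * 0.25, 1.0)) for t in range(5)]
print(poses)
```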
Regarding claim 3, Yoshiro in view of Scapel and Cohen teaches:
The method of generating a sticker of claim 2, wherein determining the motion poses of the components at the plurality of moments based on the target pose and the periodic function comprises:
determining expression weights of the components at a plurality of moments based on the periodic function; and determining motion poses of the components at a plurality of moments based on the expression weights of the components at the plurality of moments and the target pose. (Cohen: cited passage reproduced as image media_image1.png. The combination rationale of claim 2 is incorporated here.)
Regarding claim 4, Yoshiro in view of Scapel and Cohen teaches:
The method of generating a sticker of claim 3, wherein determining the expression weights of the components at the plurality of moments based on the periodic function comprises: determining, through the periodic function, the expression weights of the components at the plurality of moments based on the number of frames of the sticker and a frame rate of the sticker. (Cohen: cited passage reproduced as image media_image1.png. Each time point represents an image generation, which corresponds to the frame rate. The combination rationale of claim 2 is incorporated here.)
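For illustration only, a minimal sketch of deriving per-frame expression weights from a sinusoidal periodic function using the sticker's number of frames, frame rate, and duration; the half-period sine and the example values are assumptions rather than the claimed or cited formula.

```python
# Illustrative sketch: per-frame expression weights from a sinusoidal periodic function.
import math

def expression_weights(num_frames, frame_rate, duration):
    """Weight rises from 0 to 1 and falls back toward 0 over the sticker duration."""
    weights = []
    for i in range(num_frames):
        t = i / frame_rate                      # time of frame i in seconds
        w = math.sin(math.pi * t / duration)    # one half period over the duration
        weights.append(max(0.0, w))
    return weights

# Example: a 2-second sticker at 12 frames per second (24 frames).
print([round(w, 2) for w in expression_weights(num_frames=24, frame_rate=12, duration=2.0)])
```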
Regarding claim 5, Yoshiro in view of Scapel and Cohen teaches:
The method of generating a sticker of claim 2, wherein before determining the motion poses of the components at the plurality of moments based on the target pose and the periodic function further comprises: determining the periodic function based on a duration of the sticker; wherein the periodic function is a sinusoidal function. (Yoshiro FIG. 11, button press and button release. The combination rationale of claim 2 is incorporated here.)
Regarding claim 6, Yoshiro in view of Scapel and Cohen teaches:
The method of generating a sticker of claim 2, wherein generating the sticker based on the material images, the global positions, and motion poses of the components at a plurality of moments comprises: determining, through a driving algorithm, a position and a shape of the material image on each frame of image in the sticker based on the global positions and motion poses of the components at the plurality of moments to obtain the sticker. (Yoshiro [0009], “If the moving image of the lip is combined with the face image in this manner, a three-dimensional face model for opening and closing the mouth can be obtained as a result.” [0030], “In addition, when each of the lip images 111 to 113 in the lip image storage unit 102 is selected in an appropriate order and combined with the face image 121, and a series of synthetic face images 122 are formed, a moving image in which the mouth is opened and closed can be obtained. Furthermore, instead of representing the movement of the lip by the lip images 111 to 113, a moving image of the lip may be formed by at least one lip image and data indicating the movement, and the moving image of the lip may be combined with the face image.”)
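For illustration only, a minimal sketch of a driving step that, for each frame, maps a component's global position and its motion pose at that moment to a placement and a simple shape change (a vertical scale); the particular transform is an assumption and is not asserted to be the claimed driving algorithm or Yoshiro's synthesis method.

```python
# Illustrative sketch of a per-frame "driving" step (hypothetical transform).
def drive_frame(global_positions, pose):
    """Return, for each component, where its material image is placed and how it is scaled."""
    placements = {}
    for name, (gx, gy) in global_positions.items():
        dx, dy, scale_y = pose.get(name, (0.0, 0.0, 1.0))
        placements[name] = {"x": gx + dx, "y": gy + dy, "scale_y": scale_y}
    return placements

# Example: at one moment the mouth is shifted down slightly and stretched vertically.
print(drive_frame({"mouth": (120.0, 200.0)}, {"mouth": (0.0, 2.0, 1.4)}))
```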
Regarding claim 7, Yoshiro in view of Scapel and Cohen teaches:
The method of generating a sticker of claim 2, wherein determining the target pose of the components under the target expression comprises: determining an expression motion corresponding to the target expression based on a predetermined corresponding relationship between a plurality of expression types and expression motions, the expression motion corresponding to the target expression comprising the target pose. (Yoshiro [0009], “If the moving image of the lip is combined with the face image in this manner, a three-dimensional face model for opening and closing the mouth can be obtained as a result.” [0030], “In addition, when each of the lip images 111 to 113 in the lip image storage unit 102 is selected in an appropriate order and combined with the face image 121, and a series of synthetic face images 122 are formed, a moving image in which the mouth is opened and closed can be obtained. Furthermore, instead of representing the movement of the lip by the lip images 111 to 113, a moving image of the lip may be formed by at least one lip image and data indicating the movement, and the moving image of the lip may be combined with the face image.”)
Claims 14-16 recite limitations similar to those of claims 2-4, respectively, and are rejected accordingly.
Claims 17-22 recite limitations similar to those of claims 2-7, respectively, and are rejected accordingly.
Claims 8 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshiro in view of Scapel, and further in view of Smith et al. (US 5,870,138).
Regarding claim 8, Yoshiro in view of Scapel teaches:
The method of generating a sticker of claim 1, wherein determining the global positions of the components based on the material images comprises:
However, Yoshiro in view of Scapel does not teach, but Smith teaches:
determining bounding matrixes of the components in the material images; and determining the global positions based on the bounding matrixes. (para 53: “and the mouth position detection stage (3110-3114) share several common tasks, namely XY Projection, find max and search for bounding box.”)
Yoshiro in view of Scapel teaches finding a mouth position in an image, but does not teach a specific method for doing so. Smith teaches such a method.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Yoshiro in view of Scapel with the specific method of Smith to effectively find the component positions.
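For illustration only, a minimal sketch in the spirit of the XY-projection and bounding-box search quoted from Smith: a component's bounding box is found by projecting a binary component mask onto the X and Y axes, and the box center can serve as a global position. The mask representation is an assumption, not Smith's data layout.

```python
# Illustrative sketch: bounding box of a component from a binary mask via X/Y projection.
def bounding_box(mask):
    """mask is a list of equal-length rows of 0/1; returns (x_min, y_min, x_max, y_max) or None."""
    rows = [y for y, row in enumerate(mask) if any(row)]                    # Y projection
    cols = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]  # X projection
    if not rows or not cols:
        return None
    return (min(cols), min(rows), max(cols), max(rows))

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(bounding_box(mask))  # (1, 1, 2, 2); the box center can serve as the global position
```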
Claim 23 recites limitations similar to those of claim 8 and is rejected accordingly.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571)270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YANNA WU/Primary Examiner, Art Unit 2615