DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/14/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 10, and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tong et al. (US 20170069124 A1), hereinafter referred to as Tong, in view of Qiu et al. (WO 2023/279704 A1), hereinafter referred to as Qiu.
Claim 1. (Original) A video image processing method, comprising:
in response to an effect triggering operation (Tong, [0025] facial expression and head pose tracker 102 may be configured to output a plurality of animation messages to drive animation of an avatar, based on the determined facial expressions and head poses of the user), displaying a target virtual object model (Tong, [0074] the avatar is the target virtual object. The avatar is being displayed), and acquiring an image to be processed containing a target object (Tong, [0024] receive an image 118 of a user having a face of the user, e.g., from image capturing device 114, such as a camera, analyze the image for a number of facial and related components. The target object is the user’s image), wherein the target virtual object model is played according to a preset basic animation effect (Tong, [0034] animation message generation function 126 may be configured to selectively output animation messages 120 to drive animation of an avatar, based on the facial expression and head pose parameters depicting facial expressions and head poses of the user. Animation message 120 may specify a number of animations, such as “lower lip down” (LLIPD), “both lips widen” (BLIPW), “both lips up” (BLIPU), “nose wrinkle” (NOSEW), “eyebrow down” (BROWD), and so forth. The avatar is animated based on the preset messages that convey the extracted facial expression);
determining at least one overlay animation effect being triggered according to a face image in the image to be processed (Tong, [0095] [0108] a facial expression tracker to be operated by the processor to receive one or more additional images of a user; analyze the one or more additional images to identify facial expressions or head poses of the user; and generate a plurality of animation messages having a plurality of facial expression or head pose parameters that describe the facial expressions or head poses);
Tong does not explicitly disclose overlaying the at least one overlay animation effect for the target virtual object model to obtain a target video frame and display the target video frame.
Qiu discloses overlaying the at least one overlay animation effect for the target virtual object model to obtain a target video frame and display the target video frame (Qiu, Fig.1A S105, Pages 2, 6, 8 then performs capture on limbs of the real live streamer contained in the video image, so as to obtain posture information of the real live streamer; after the posture information is determined, a corresponding driving signal can be generated, wherein the driving signal is used for driving the display of an animation special effect corresponding to the virtual live-streamer model in a video live-streaming picture; and an animation special effect matching the posture information is determined, and a matched animation special effect is used as a target animation special effect, wherein the target animation special effect comprises a special effect of virtual limbs corresponding to limb parts to be recognized executing corresponding limb actions of a "Rabbit Princess", for example, the hands and arms of a "Rabbit Princess" can execute an "OK pose"; in addition, a corresponding sticker special effect can also be displayed in the video live-streaming picture, for example, material and special effects, such as "love stickers", can be displayed at designated positions of the video live-streaming picture. On the basis of realizing switching between face capture and limb capture, the trigger content of animation special effects is enriched, thereby improving the live-streaming experience of users).
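Purely for illustration, the following minimal sketch (Python; all function names and threshold values are assumptions, not taken from Tong or Qiu) shows one way tracked facial expression intensities could be converted into named animation messages of the kind Tong lists in [0034] (e.g., LLIPD, BLIPW, NOSEW) to drive the base animation of the avatar.

# Illustrative sketch only: mapping tracked expression intensities to animation
# messages. Names and thresholds are assumptions, not from the cited references.
EXPRESSION_TO_MESSAGE = {
    "lower_lip_down": "LLIPD",
    "both_lips_widen": "BLIPW",
    "both_lips_up": "BLIPU",
    "nose_wrinkle": "NOSEW",
    "eyebrow_down": "BROWD",
}

def generate_animation_messages(expression_params, threshold=0.3):
    """Return the animation messages whose tracked intensity exceeds a threshold.

    expression_params: dict mapping an expression name to a 0..1 intensity
    produced by a facial expression / head pose tracker.
    """
    messages = []
    for name, intensity in expression_params.items():
        code = EXPRESSION_TO_MESSAGE.get(name)
        if code is not None and intensity >= threshold:
            messages.append((code, intensity))
    return messages

# Example frame: the user lowers the eyebrows and widens the lips.
frame_params = {"eyebrow_down": 0.8, "both_lips_widen": 0.5, "nose_wrinkle": 0.1}
print(generate_animation_messages(frame_params))
# [('BROWD', 0.8), ('BLIPW', 0.5)]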
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tong with the teachings of Qiu, since both references are analogous art in the virtual-reality-related field.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to combine the teachings of Tong with the teachings of Qiu in order to guarantee the stability of the picture captured by the visual motion capture solution.
Claims 13 and 14 recite essentially the same limitations as claim 1. Therefore, the rejection of claim 1 is applied to claims 13 and 14.
Claims 2 and 15. (Original) The method according to claim 1, wherein the effect triggering operation comprises at least one of the following: triggering an effect prop corresponding to the target virtual object model; encompassing the face image in a detected view field region (Tong, [0025] facial expression and head pose tracker 102 may be configured to output a plurality of animation messages to drive animation of an avatar, based on the determined facial expressions and head poses of the user. The face is detected in a camera field of view and analyzed). Same rationale as claim 1.
Claims 3 and 16. (Original) The method according to claim 1, wherein the displaying a target virtual object model and acquiring an image to be processed containing a target object comprises: retrieving the target virtual object model corresponding to the effect triggering operation (Tong, [0115] receive an image having a face of a user; analyze the image to identify various facial and related components of the user; access an avatar database to identify corresponding artistic renditions for the various facial and related components stored in the database; and combine the corresponding artistic renditions for the various facial and related components to form an avatar, without user intervention), and controlling the target virtual object model to play according to the basic animation effect (Tong, [0126] animating avatar); acquiring the image to be processed containing the target object based on a camera apparatus deployed on a terminal device (Tong, [0024] receive an image 118 of a user having a face of the user, e.g., from image capturing device 114, such as a camera, analyze the image for a number of facial and related components. The target object is the user’s image). Same rationale as claim 1.
Claims 4 and 17. (Original) The method according to claim 1, wherein after the acquiring an image to be processed containing a target object, the method further comprises: using the target virtual object model as a foreground image (Tong, [0025] facial expression and head pose tracker 102 may be configured to output a plurality of animation messages to drive animation of an avatar, based on the determined facial expressions and head poses of the user. This animation is interpreted as the foreground image) and using the image to be processed as a background image (Tong, [0024] database is accessed to retrieve an avatar resembling features of the user’s image. The mesh image is the background that will be wrapped with textured animation). Same rationale as claim 1.
Claim 10. (Original) The method according to claim 1, wherein the overlaying the at least one overlay animation effect for the target virtual object model to obtain a target video frame and display the target video frame comprises:
overlaying the at least one overlay animation effect with the basic animation effect of the target virtual object model to obtain a target virtual object model for performing the target effect, and displaying the target virtual object model (Qiu, Fig.1A S105, Pages 2, 6, 8 then performs capture on limbs of the real live streamer contained in the video image, so as to obtain posture information of the real live streamer; after the posture information is determined, a corresponding driving signal can be generated, wherein the driving signal is used for driving the display of an animation special effect corresponding to the virtual live-streamer model in a video live-streaming picture; and an animation special effect matching the posture information is determined, and a matched animation special effect is used as a target animation special effect, wherein the target animation special effect comprises a special effect of virtual limbs corresponding to limb parts to be recognized executing corresponding limb actions of a "Rabbit Princess", for example, the hands and arms of a "Rabbit Princess" can execute an "OK pose"; in addition, a corresponding sticker special effect can also be displayed in the video live-streaming picture, for example, material and special effects, such as "love stickers", can be displayed at designated positions of the video live-streaming picture. On the basis of realizing switching between face capture and limb capture, the trigger content of animation special effects is enriched, thereby improving the live-streaming experience of users);
wherein the target video frame comprises the target virtual object model for performing the target effect and the target object, the target object is a background image (Tong, [0024] database is accessed to retrieve an avatar resembling features of the user’s image. The mesh image is the background that will be wrapped with textured animation) and the target virtual object model is a foreground image (Tong, [0025] facial expression and head pose tracker 102 may be configured to output a plurality of animation messages to drive animation of an avatar, based on the determined facial expressions and head poses of the user. This animation is interpreted as the foreground image). Same rationale as claim 1.
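For illustration only, the following minimal sketch (Python; function and variable names are assumptions, not taken from Tong or Qiu) shows the layering and compositing mapped above: a triggered overlay animation effect is blended with the base animation of the target virtual object model, and the rendered model is then composited as the foreground image over the image to be processed as the background image to form the target video frame.

import numpy as np

def apply_overlay_effect(base_pose, overlay_effects):
    """Blend overlay animation parameters on top of the base animation parameters."""
    pose = dict(base_pose)
    for effect in overlay_effects:
        for channel, value in effect.items():
            pose[channel] = pose.get(channel, 0.0) + value  # additive layering
    return pose

def composite_frame(foreground_rgba, background_rgb):
    """Alpha-composite the rendered avatar (foreground) over the camera image (background)."""
    alpha = foreground_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = foreground_rgba[..., :3].astype(np.float32)
    bg = background_rgb.astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)

# Example: one overlay effect ("mouth_open" intensity 0.5) layered over a base pose,
# then a 480x640 RGBA render composited over a camera frame of the same size.
target_pose = apply_overlay_effect({"mouth_open": 0.2}, [{"mouth_open": 0.5}])
frame = composite_frame(np.zeros((480, 640, 4), np.uint8), np.zeros((480, 640, 3), np.uint8))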
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Tong and Qiu, in view of Hirota et al. (US 20170186237 A1), hereinafter Hirota.
Regarding claim 11, the combination of Tong and Qiu does not disclose the method according to claim 1, wherein when the acquiring comprises the image to be processed of the target object, the method further comprises: determining relative position information between the target object and a camera apparatus, so as to adjust display position information of the target virtual object model in the target video frame based on the relative position information.
Hirota discloses that the image drawing unit 217 calculates a relative position and orientation of the virtual object with respect to the camera 201 based on the position and orientation of the camera 201 and the position and orientation of the virtual object. The image drawing unit 217 then multiplies the shape information about the virtual object by the relative position and orientation of the virtual object and the internal parameter matrix of the camera 201 to determine the display position of the virtual object in the image captured by the camera 201.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination of Tong and Qiu with the teachings of Hirota, since they are all analogous art in the virtual-reality-related field.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to combine the teachings of the combination of Tong and Qiu with the teachings of Hirota in order to suppress errors in the processing for associating the map with the image captured by the camera, which cause changes in a display position of the virtual object.
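For illustration of the display-position calculation Hirota describes above, the following minimal sketch (Python; all numeric values are assumptions) applies the relative position and orientation of the virtual object with respect to the camera to a point of the object's shape information and projects it through the camera's internal parameter matrix to obtain the display position in the captured image.

import numpy as np

def display_position(K, R_rel, t_rel, point_object):
    """Project a 3-D point of the virtual object into the camera image.

    K: 3x3 internal (intrinsic) parameter matrix of the camera.
    R_rel, t_rel: relative rotation (3x3) and translation (3,) of the virtual
    object with respect to the camera.
    point_object: 3-D point of the virtual object in its own coordinate frame.
    """
    p_cam = R_rel @ point_object + t_rel   # object frame -> camera frame
    uvw = K @ p_cam                        # perspective projection
    return uvw[:2] / uvw[2]                # pixel coordinates (u, v)

# Assumed example values: object placed 2 m in front of the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R_rel = np.eye(3)
t_rel = np.array([0.0, 0.0, 2.0])
print(display_position(K, R_rel, t_rel, np.array([0.1, 0.0, 0.0])))
# -> approximately [360. 240.]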
Allowable Subject Matter
Claims 5-9 and 18-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 5 and 18. No prior art of record teaches the features "The method according to claim 1, wherein the determining at least one overlay animation effect being triggered according to a face image in the image to be processed comprises: determining the face image in the image to be processed based on an image segmentation model; determining a plurality of key points to be processed of at least one part in the face image, and determining a trigger parameter of the at least one part in the face image according to the plurality of key points to be processed; determining the at least one overlay animation effect based on at least one trigger parameter."
Claims 6-9 depend on allowable claim 5 and are therefore allowable for the same reasons as claim 5. Claims 19-21 depend on allowable claim 18 and are therefore allowable for the same reasons as claim 18.
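For illustration only, the following minimal sketch (Python) shows the kind of key-point processing recited in claims 5 and 18: key points of a facial part are reduced to a trigger parameter, and an overlay animation effect is selected when that parameter exceeds a threshold. The landmark names, threshold value, and effect identifier are assumptions, not taken from the claims or the cited art.

import numpy as np

def mouth_open_ratio(keypoints):
    """Trigger parameter: vertical mouth opening normalized by mouth width.

    keypoints: dict of named 2-D face key points for the part being processed.
    """
    width = np.linalg.norm(keypoints["mouth_left"] - keypoints["mouth_right"])
    opening = np.linalg.norm(keypoints["upper_lip"] - keypoints["lower_lip"])
    return opening / max(width, 1e-6)

def select_overlay_effects(keypoints, threshold=0.35):
    """Return the overlay animation effects whose trigger parameter exceeds the threshold."""
    effects = []
    if mouth_open_ratio(keypoints) > threshold:
        effects.append("mouth_open_overlay_effect")
    return effects

# Assumed example key points (pixel coordinates).
keypoints = {
    "mouth_left":  np.array([100.0, 200.0]),
    "mouth_right": np.array([160.0, 200.0]),
    "upper_lip":   np.array([130.0, 190.0]),
    "lower_lip":   np.array([130.0, 220.0]),
}
print(select_overlay_effects(keypoints))   # ['mouth_open_overlay_effect']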
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is as follows:
US 20180308276 A1 In accordance with embodiments of the present disclosure, a computer-implementable method may include receiving a two-dimensional image comprising a face of a subject, deforming a three-dimensional base head model to conform to the face in order to generate a three-dimensional deformed head model, deconstructing the two-dimensional image into three-dimensional components of geometry, texture, lighting, and camera based on the three-dimensional deformed head model, and generating a three-dimensional character from the two-dimensional image based on the deconstructing. Such method may also include animating the three-dimensional character based on the three-dimensional components and data associated with the three-dimensional deformed head model and rendering the three-dimensional character as animated based on the three-dimensional components and data associated with the three-dimensional deformed head model to a display device associated with an information handling system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO whose telephone number is (571)270-3390. The examiner can normally be reached Monday-Friday (8:00AM-5:00PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARTIN MUSHAMBO/Primary Examiner, Art Unit 2615 01/31/2026