DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/21/2025 have been fully considered but they are not persuasive. Applicant argues, on page 6, paragraph A, that the Specification supports "processing," "alignment," and "geometric transformation," citing paragraphs [0114], [0099], [0107], and [0117]. The examiner respectfully disagrees. Paragraph [0114] discloses sensor functions, not processing; [0099] discloses animation, not aligning; [0107] describes FIG. 5(a) and FIG. 5(b), not blending and adjusting; and [0117] describes communication devices, not merging. The specification does not support alignment or geometric transformation of the second face region.
Applicant argues, on page 7, paragraph B, that the Specification supports "2D warping," citing Fig. 14(a)-(b) and paragraphs [0100] and [0120]. The examiner respectfully disagrees. Fig. 14(a)-(b) describes warping of a ring, not warping the region of the second face; paragraph [0100] discloses warping of a body, not warping a face region; and [0120] describes user interface 1603, not adjusting face regions. The specification does not disclose 2D warping the region of the second face. Therefore, the 112(a) rejection is maintained.
Applicant argues, on page 8, that Huang does not teach or suggest the claimed multi-frame, pose-matched, expression-continuous face generation. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., pose-matched, expression-continuous face generation) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page 9, first two paragraphs, that Huang does not support tracking facial pose across frames. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., tracking facial pose across frames) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page 9, paragraph 1, that Huang does not use dynamic frame-specific face generation. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., dynamic frame-specific face generation) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page .., paragraph 2, that Huang does not disclose tracking multi-frame pose, orientation, or expression of the second face. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., tracking multi-frame pose, orientation, or expression of the second face) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page 10, paragraph 3, that Huang does not teach generating modified first face images corresponding to the second face image. However, Garrido section 6 teaches this feature. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues, on page 11, paragraph 4, that Huang does not provide continuous expression, orientation, or scaling across frames. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., continuous expression, orientation, or scaling across frames) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page 11, paragraph 5, that Huang's replacing the face region with the target face region does not constitute generating new frames, generating modified face images, frame-by-frame synthesis, or pose-matched reenactment. The examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., generating new frames; frame-by-frame synthesis or pose-matched reenactment) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Generating modified face images is disclosed by Garrido. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues, on page 12, paragraph 6, that Huang uses no 2D warping, no deformation, and no transformation of the scene face region. However, Garrido section 6 teaches these features. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues, on page 13, paragraph 1, that Garrido is not a general-purpose scene face-swap system. The examiner respectfully disagrees. The seamless swap method of Garrido can be applied to modify Huang's general-purpose face swap to arrive at the argued features of applicant's invention.
Applicant argues, on page 13, paragraph 2, that Garrido does not process or warp the scene face region. However, Garrido teaches this feature. Moreover, the specification does not support warping the scene face region, as explained above in item 3.
Applicant argues, on page 14, paragraph 3, that Garrido does not modify the user face to match a sequence of arbitrary poses in a scene video. The examiner respectfully disagrees. Garrido sections 3 and 6 modify the user face to match the target frame sequence. Applicant further argues that Garrido cannot track second face regions within arbitrary scene videos, handle non-actor scenes or complex backgrounds, infer or match body orientation or perspective, or modify the user's face based on arbitrary scene face meta-information. The examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., tracking second face regions within arbitrary scene videos, handling non-actor scenes or complex backgrounds, inferring or matching body orientation or perspective, and modifying the user's face based on arbitrary scene face meta-information) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues, on page 14, paragraph 4, that Garrido does not insert the user face into the target scene. However, Huang teaches this feature. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues, on page 15, paragraph 5, that applicant's method requires full region replacement with modified first face images. Huang teaches full region replacement with a face image, and Garrido teaches modifying the first face image to match the target image. Therefore, the combination of Huang and Garrido teaches the argued limitations.
Applicant argues, on page 15, paragraph 6, that Garrido cannot generate the "seamless single-person image" result required in applicant's claim. The examiner respectfully disagrees. The combination of Huang and Garrido teaches all the recited claim limitations.
Applicant argues, on page 16, paragraph 7, that Garrido cannot be combined with Huang without impermissible hindsight. The examiner respectfully disagrees. In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Applicant argues, on page 17, first paragraph, that Seidel does not supply the missing elements and is technically incompatible with the facial-synthesis pipeline required by the claim. Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant argues, on page 19, that the Examiner's reasoning lacks an explicit rationale, such as "shared technical goals, …". The examiner respectfully disagrees. The rationale of producing a seamless image is considered a desired technical goal for combining Huang and Garrido. Also, providing a user-as-actor in a new movie is a desired technical goal for combining Gavade.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 recites “processing the face region of the second face image including alignment, geometric transformation, and/or 2D warping.” The original specification does not support this limitation, as explained in the response to arguments above (items 2 and 3). Dependent claims 2-20 are rejected for depending from claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 10-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over HUANG (US 20190005305 A1) in view of Garrido et al. (Automatic Face Reenactment, 2014 IEEE Conference on Computer Vision and Pattern Recognition), and in further view of Gavade (US 20120005595 A1).
Regarding claim 1, Huang teaches A method for generating a video using user image/video comprising:
providing a first face image/video (Huang [0032] the target face image is an image designated by a user) based on user input to a processor (Huang [0023] a device for processing a video);
detecting and extracting the user face region and face position information by the processor from the first face image/video frames (obvious from Huang [0025] the first face image in the M frames is replaced with a target face image. Note: it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application that the replacement requires prior detection and extraction of the target face image);
receiving a scene video and a face position information of a second face image of a person in different frames of the scene video (Huang [0030] face features of each frame are extracted, Huang [0039] determine positions of facial features (such as eyes, eyebrows, the nose, the mouth, the outer contour of the face) based on the face recognition) from a data storage by the processor which comprises the second face image representative of a human with a specified face region (Huang [0005] performing target recognition on each frame in an input video to obtain M frames containing a first face image);
Whereas face region is the space for face with/without neck portion (Huang [0039] the outer contour of the face) and/or hair in scene person's image or video frame,
Whereas face image is the space for face in person's image or video frame (Huang [0039] determine positions of facial features (such as eyes, eyebrows, the nose, the mouth, the outer contour of the face)), and
Wherein the face position information comprises a boundary of face region (Huang [0039] the outer contour of the face) and optionally at least one of tilt of face, orientation of face, geometrical location of face region, boundary of face region (Huang [0039] determine positions of facial features (such as eyes, eyebrows, the nose, the mouth, the outer contour of the face)) and zoom of the face region.
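By way of illustration only, the detection and boundary extraction mapped above (locating a face region and producing its boundary as face position information) is a conventional operation. The following minimal Python/OpenCV sketch is a hypothetical example, not code from Huang; the function name and parameter values are assumptions:

    import cv2

    # Load OpenCV's bundled frontal-face detector (Haar cascade).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_position_info(frame_bgr):
        """Return (x, y, w, h) bounding boxes of detected face regions."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Each returned box approximates the boundary of one face region.
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)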
Huang et al. do not explicitly teach
modifying the first face image/video frames to generate the modified first face images corresponding to the second face images in different frames of scene video by the processor (4) for maintaining a continuous facial expression along with orientation and scaling to match with the orientation and scaling of the second face images of the scene video frames;
warping the face region of the second face image by the processor (4) to match the second face image with the modified first face image of user image in different frames; and
replacing the warped face region of second face image in the scene video frames with the modified first face images with/without hair by the processor (4) in a way to generate seamless single person image in each frame and subsequently applying the process for all frames to create a seamless video.
In a similar endeavor, Garrido et al. teach
modifying the first face image/video frames to generate the modified first face images corresponding to the second face images in different frames of scene video by the processor (4) for maintaining a continuous facial expression along with orientation and scaling to match with the orientation and scaling of the second face images of the scene video frames (Garrido section 6 we employ a 2D warping approach which combines global and local transformations to produce a natural shape deformation of the user’s face that matches the actor in the target sequence);
warping the face region of the second face image by the processor (4) to match the second face image with the modified first face image of user image in different frames (Garrido section 3 step 3 The target head pose is transferred to the selected source frames by warping the facial landmarks. A smooth transition is created by synthesizing in-between frames, and blending the source face into the target sequence using seamless cloning, section 6 we employ a 2D warping approach which combines global and local transformations to produce a natural shape deformation of the user’s face that matches the actor in the target sequence); and
replacing the warped face region of second face image in the scene video frames with the modified first face images with/without hair by the processor (4) in a way to generate seamless single person image in each frame and subsequently applying the process for all frames to create a seamless video (Garrido section 2 the new face with different identity needs to be inserted as naturally as possible in the original video, Garrido section 3 step 3 The target head pose is transferred to the selected source frames by warping the facial landmarks. A smooth transition is created by synthesizing in-between frames, and blending the source face into the target sequence using seamless cloning).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified Huang et al. by incorporating Garrido et al.'s seamless swap to arrive at the invention.
The motivation for doing so would have been to produce a seamless target image.
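By way of illustration only, the landmark-driven 2D warping and seamless cloning Garrido describes can be sketched as below. This Python/OpenCV sketch is a hypothetical example, not Garrido's implementation: it applies a single global similarity transform between landmark sets and omits Garrido's local refinement step; all names and parameters are assumptions:

    import cv2
    import numpy as np

    def warp_and_blend(source_face, source_pts, target_frame, target_pts, mask):
        """Warp a source face toward target landmarks, then blend it in."""
        # Global similarity transform (rotation, scale, translation)
        # estimated from corresponding facial landmarks.
        M, _ = cv2.estimateAffinePartial2D(
            np.float32(source_pts), np.float32(target_pts))
        h, w = target_frame.shape[:2]
        warped = cv2.warpAffine(source_face, M, (w, h))
        warped_mask = cv2.warpAffine(mask, M, (w, h))
        # Poisson ("seamless") cloning blends the warped face into the frame.
        x, y, bw, bh = cv2.boundingRect(warped_mask)
        center = (x + bw // 2, y + bh // 2)
        return cv2.seamlessClone(
            warped, target_frame, warped_mask, center, cv2.NORMAL_CLONE)

Applied frame by frame, a warp-then-blend step of this kind yields the composited sequence discussed above.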
The combination of Huang et al. and Garrido et al. does not teach
Scene video from a data storage by the processor which comprises a body model of a person whereas body model of a person comprises the second face
In a similar endeavor, Gavade teaches
Scene video from a data storage by the processor which comprises a body model of a person (Gavade [0014] One scene of original movie 104-1 includes an actor 112, Gavade [0024] Original content DB 212 may include a server to store content (e.g., video content) into which users may insert images and/or video of themselves (e.g., "original" content)) whereas body model of a person comprises the second face (Gavade actor 112 in scene 104-1 in Fig. 1, Gavade [0014] the mixing engine may replace the images of the face of actor 112 in movie 104-1 with images of Mary's 106).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified the combination of Huang et al. and Garrido et al. by incorporating Gavade's scene video to arrive at the invention.
The motivation for doing so would have been to provide a user-as-actor in a new movie.
Regarding claim 2, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein the scene video is pre-processed to provide the scene video where the body model of the person is with a removed face region (obvious from Huang [0025] the first face image in the M frames is replaced with a target face image. Note: it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application that replacing the face image requires removing the face region of the body model to replace it with the user face).
Regarding claim 3, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein extracting the face image comprises extracting the face cropped with neck from user image/video (Huang Fig. 2 first output frame image showing head and neck).
Regarding claim 4, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein extracting the face image comprises extracting the face cropped with hair from user image/video (Huang Fig. 2 first output frame image showing head and hair).
Regarding claim 5, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein extracting the face image comprises extracting the region from user image, which includes face (Huang [0032] the face features of the each of the M frames are replaced with the face features of the target face image).
Regarding claim 6, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein extracting the face image based on an extraction input, wherein the extraction input comprises selection of at least one of face, hair, neck (Huang Fig. 2 first output frame image showing head, hair, and neck), region around face.
Regarding claim 10, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein the video with user image/s is processed with background image/video to generate processed video (Huang Fig. 3 showing first output frame image comprising the target image background).
Regarding claim 11, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein the input scene video is provided with removed face with face position information of each frame (Huang [0032] the face features of the each of the M frames are replaced with the face features of the target face image, Huang [0034] By performing inverse alignment processing on a converted target face image, a face image with a face position coincident with that of the target face image).
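By way of illustration only, inverse alignment processing of the kind quoted from Huang [0034] above amounts to inverting the forward alignment transform and mapping the converted face back to its original frame position. The minimal Python/OpenCV sketch below uses hypothetical names; the 2x3 matrix M is assumed to be the forward alignment transform:

    import cv2

    def inverse_align(converted_face, M, frame_shape):
        """Map a pose-normalized (aligned) face back to its frame position."""
        # invertAffineTransform reverses the 2x3 forward alignment matrix.
        M_inv = cv2.invertAffineTransform(M)
        h, w = frame_shape[:2]
        return cv2.warpAffine(converted_face, M_inv, (w, h))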
Regarding claim 12, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein at least a set of user images are provided which show same face in two slightly different perspective and are processed with a set of video which shows same scene in slightly different perspective to generate a set of video which show user image with body of person in scene video in slightly different perspective (Garrido page 1 col 2 we adapt the head pose and face shape of the selected source frames to match those of the target).
The motivation for doing so would have been to produce a proper composite of the user’s face in the target video sequence.
Regarding claim 13, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, comprising:
- receiving a face area information (Huang [0044] a face region image corresponding to the target feature point set is obtained) comprising information of at least an area showing hair, head and/or neck wearable, face wearable and an object covering face of body model, in different frames of scene video (Huang [0039] feature point locating is performed on the first face image in a first frame in the M frames to obtain a first feature point set; the points in FIG. 4 represent locations of feature points of the face image, in which each feature point corresponds to one feature value);
- processing the extracted face image and the frames of the scene video using the face position information and the face area information (Huang [0044] the face region image is synthesized with the first frame after the face-swap to obtain M frames after face-swap).
Regarding claim 14, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein the scene video frames comprises at least a vehicle, a background, a helmet, hair (Garrido page 1 col 1 conserving the hair, face outline, and skin color, as well as the background and illumination of the target video).
The motivation for doing so would have been to produce a proper composite of the user’s face in the target video sequence.
Regarding claim 15, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, wherein providing a group photo or a single person video or group video based on user input, and selecting a face based on selection user input (Huang [0051] the preset face image base includes a plurality of types of face images, and at least one target face image may be selected from the preset face image base), processing the group photo or the single person video or the group video based on the selection user input to generate the user image/video with face (Huang [0051] When a plurality of target face images are determined, an instruction for designating an image for face-swap may be received, such that a target face image to be finally converted is determined, or the plurality of target face images may be all converted and then provided to the user for selecting).
Regarding claim 17, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, comprising:
- processing the scene video to elect the face region of body model of the person in scene video by the processor from the scene video frames (Huang [0038] feature point locating is performed on the first face image in a first frame in the M frames to obtain a first feature point set); and
- generate face position information (Huang [0039] determine positions of facial features (such as eyes, eyebrows, the nose, the mouth, the outer contour of the face)).
Regarding claim 18, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, comprising:
- receiving skin tone input related to skin tone (Garrido page 1 col 1 replace the actor’s inner face region, while conserving the hair, face outline, and skin color) or detecting skin tone information from the face of the user image/video;
- providing the video frames of the body model in matching skin tone either from a database based on the skin tone input/ skin tone information or by processing the body model skin colour based on the skin tone input/ skin tone information in scene video frames (Garrido section 6.2 The lighting of the target sequence, and the skin appearance and hair of the target actor, should be preserved).
The motivation for doing so would have been to produce a proper composite of the user’s face in the target video sequence.
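By way of illustration only, one conventional way to match a face's skin tone to a reference region is Reinhard-style color transfer in Lab space. The sketch below is a hypothetical Python/OpenCV example, not Garrido's actual method; the function name and the Lab-space choice are assumptions:

    import cv2
    import numpy as np

    def match_skin_tone(face_bgr, reference_bgr):
        """Shift the face's color statistics toward the reference region."""
        src = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        # Per-channel mean/std transfer (Reinhard-style color transfer).
        out = (src - src.mean(axis=(0, 1))) \
            * (ref.std(axis=(0, 1)) / (src.std(axis=(0, 1)) + 1e-6)) \
            + ref.mean(axis=(0, 1))
        return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8),
                            cv2.COLOR_LAB2BGR)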
Regarding claim 19, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, comprising:
- merging of extracted face image/s with the body model of person at neck (Huang [0039] the outer contour of the face) in the scene video frame/s (Huang [0070] performing the face-swap between any frame containing the first face image and the target face image to obtain the first output frame, and performing the image synthesis on each of the extracted target feature point).
Regarding claim 20, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1 comprising:
- processing the extracted face in different frame/s of scene video with at least one of environment lighting (Garrido section 6.2 The lighting of the target sequence, and the skin appearance and hair of the target actor, should be preserved), shading, overlay glass effect on/around the face.
The motivation for doing so would have been to produce a proper composite of the user’s face in the target video sequence.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over HUANG in view of Garrido et al., in further view of Gavade, and in further view of Seidel et al. (US 20130330060 A1).
Regarding claim 7, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, but does not teach
wherein the scene video is provided as per the body shape and/ or size information provided by the user.
In a similar endeavor, Seidel et al. teach
wherein the scene video is provided as per the body shape and/ or size information provided by the user (Seidel [0041] The inventive reshaping interface allows the user to generate a desired 3D target shape, Seidel [0062] simulate the desired appearance of the actor on screen, even if his true body shape and proportions do not match the desired look).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified the combination of Huang et al., Garrido et al., and Gavade by incorporating Seidel et al.'s body reshape to arrive at the invention.
The motivation for doing so would have been to satisfy the user's desire.
Regarding claim 8, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, but does not teach
wherein the person's body in scene video is reshaped to be in different shape and size.
In a similar endeavor Seidel et al. teach
wherein the person's body in scene video is reshaped to be in different shape and size (Seidel [0048] able to perform a large range of semantically guided body reshaping operations on video data of many different formats that are typical in movie and video production).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified the combination of Huang et al., Garrido et al., and Gavade by incorporating Seidel et al.'s body reshape to arrive at the invention.
The motivation for doing so would have been to satisfy the user's desire.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over HUANG in view of Garrido et al., in further view of Gavade, and in further view of Dreessen (US 20180357472 A1).
Regarding claim 9, The combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, but does not teach
wherein the scene video comprises a background, and processing the scene video to remove the background.
In a similar endeavor, Dreessen teaches
wherein the scene video comprises a background, and processing the scene video to remove the background (Dreessen [0194] Editing the target video for comparison can include identifying and removing all or part of a background).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified the combination of Huang et al., Garrido et al., and Gavade by incorporating Dreessen's background removal to arrive at the invention.
The motivation for doing so would have been to produce a better-quality video.
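By way of illustration only, background removal of the kind Dreessen describes can be performed with a learned background-subtraction model. The Python/OpenCV sketch below uses hypothetical names and parameter values and is not code from Dreessen:

    import cv2

    # A background model learned over the video's frame history.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    detectShadows=False)

    def strip_background(frame_bgr):
        """Zero out the pixels classified as background in one frame."""
        fg_mask = subtractor.apply(frame_bgr)  # 255 = foreground, 0 = background
        return cv2.bitwise_and(frame_bgr, frame_bgr, mask=fg_mask)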
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over HUANG in view of Garrido et al., in further view of Gavade, and in further view of Sharma et al. (US 7734070 B1).
Regarding claim 16, the combination of Huang et al., Garrido et al., and Gavade teaches the method according to claim 1, but does not teach
wherein the scene video comprises more than one body model of persons, the method comprising:
- providing one or more user image/video having one or more faces and selecting faces for body models based on user selection input.
In a similar endeavor, Sharma et al. teach
wherein the scene video comprises more than one body model of persons (Sharma col 5 lines 45-46 selection menu for the available replaceable actors’ images through said interaction interfaces), the method comprising:
- providing one or more user image/video having one or more faces and selecting faces (Sharma col 5, line 48 the replacing users' images are decided) for body models based on user selection input (Sharma col 5, lines 43-45 the person can manually select the replaceable actor’s images).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the examined application to have modified the combination of Huang et al., Garrido et al., and Gavade by incorporating Sharma et al.'s replaceable-actor selection to arrive at the invention.
The motivation for doing so would have been to allow the user to select desirable images.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAID M ELNOUBI whose telephone number is (571)272-9732. The examiner can normally be reached Monday-Friday 9:30AM to 6:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kathy Wang-Hurst can be reached at 571-270-5371. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAID M ELNOUBI/Examiner, Art Unit 2644