DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-7, 14-16, 20, 21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 recites “a first feature encoder” in line 3. The limitation was already recited in line 2 of claim 1. It is unclear to the Examiner whether the limitation in line 3 of claim 5 is the same or different from the limitation in line 2 of claim 1.
Claim 6 recites “a first feature encoder” in line 4. The limitation was already recited in line 2 of claim 1. It is unclear to the Examiner whether the limitation in line 4 of claim 6 is the same or different from the limitation in line 2 of claim 1.
Claim 6 recites “a second feature encoder” in lines 8-9. The limitation was already recited in lines 6-7 of claim 1. It is unclear to the Examiner whether the limitation in lines 8-9 of claim 6 is the same or different from the limitation in lines 6-7 of claim 1.
Claim 7 recites “cross-iteratively training” in line 7. The limitation was already recited in line 3. It is unclear to the Examiner whether the limitation in line 7 is the same or different from the limitation in line 3. The Examiner suggests amending the limitation in line 7 to read as “the cross-iteratively training”.
Claim 7 recites “adjusting parameters” in line 21. The limitation was already recited in line 13. It is unclear to the Examiner whether the limitation in line 21 is the same or different from the limitation in line 13.
Claims 14-16 recite similar limitations as claims 5-7, respectively. Therefore, claims 14-16 require similar corrections as claims 5-7, respectively.
Claims 20, 21 recite similar limitations as claims 5, 6, respectively. Therefore, claims 20, 21 require similar corrections as claims 5, 6, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 9, 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (U.S. Patent Application 20210358190) in view of Yada et al. (U.S. Patent Application 20200356591) and further in view of Zhang et al. (U.S. Patent Application 20220084270).
In regards to claim 1, Choi teaches a method for generating character style profile image [e.g. method for generating an image of a virtual item applied to an avatar, 0008], comprising:
inputting an original character profile image [e.g. inputting the image to the user device, 0095] to obtain first character profile feature code [e.g. detecting objects related to the plurality of actual items by display pixel analysis using elements such as continuous border lines and color/texture/pattern change in the image, 0095];
determining an attribute increment between the original character profile image and a template image [e.g. determining a style attribute between the feature representations of the image of the target item and the feature representations of the template, 0117-0121];
inputting the attribute increment and the first character profile feature code [e.g. inputting the style attribute and the target item, 0125-0127] to obtain a second character profile feature code [Fig. 6; e.g. to generate a virtual style image, 0126];
inputting the second character profile feature code [e.g. inputting a virtual style image, 0125-0129] into a style profile generative model [e.g. generative adversarial network, 0126] to obtain an initial character style profile image [Fig. 6; e.g. to generate the virtual item, 0125-0129]; and
merging the initial character style profile image to obtain a target character style profile image [Fig. 7; e.g. applying the virtual item to the avatar to obtain a final avatar, 0131-0133, also see 0087].
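For illustration only, the overall flow that claim 1 recites may be sketched as follows. This is a minimal placeholder sketch, not the applicant's or Choi's implementation; every function name, shape, and operation below is a hypothetical stand-in.

```python
import numpy as np

# Hypothetical stand-ins for the claimed components; none of these names
# come from the application or the cited references.
def first_feature_encoder(image: np.ndarray) -> np.ndarray:
    """Map a character profile image to a feature code (placeholder)."""
    return image.mean(axis=(0, 1))

def attribute_increment(original_code: np.ndarray, template_code: np.ndarray) -> np.ndarray:
    """Attribute increment between the original image and the template."""
    return template_code - original_code

def second_feature_encoder(code: np.ndarray, increment: np.ndarray) -> np.ndarray:
    """Combine the first feature code with the attribute increment (placeholder)."""
    return code + increment

def style_profile_generator(code: np.ndarray) -> np.ndarray:
    """Decode a feature code into an initial character style profile image (placeholder)."""
    return np.tile(code, (64, 64, 1))

def merge_into_template(style_image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Merge the generated style profile into the template image (placeholder blend)."""
    return 0.5 * style_image + 0.5 * template

original = np.random.rand(64, 64, 3)   # original character profile image
template = np.random.rand(64, 64, 3)   # template image

code1 = first_feature_encoder(original)                          # first feature code
delta = attribute_increment(code1, first_feature_encoder(template))
code2 = second_feature_encoder(code1, delta)                     # second feature code
initial = style_profile_generator(code2)                         # initial style image
target = merge_into_template(initial, template)                  # target style image
```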
Choi does not explicitly teach
inputting an original character profile image into a first feature encoder to obtain first character profile feature code (emphasis added);
inputting the attribute increment and the first character profile feature code into a second feature encoder to obtain a second character profile feature code (emphasis added);
merging the initial character style profile image into the template image to obtain a target character style profile image (emphasis added).
However, Yada teaches
inputting an original character profile image [e.g. first input image, 0061] into a first feature encoder [e.g. first encoder, 0061] to obtain first character profile feature code [e.g. first latent variable vector, 0061];
inputting the attribute increment [e.g. first latent variable vector, 0056] and the first character profile feature code [e.g. second latent variable vector, 0056] into a second feature encoder [e.g. mixer, 0056] to obtain a second character profile feature code [e.g. mixed latent variable vector, 0056];
Therefore, it would have been obvious to one of ordinary skill in the art to have modified Choi’s method with the features of
inputting an original character profile image into a first feature encoder to obtain first character profile feature code;
inputting the attribute increment and the first character profile feature code into a second feature encoder to obtain a second character profile feature code;
in the same conventional manner as taught by Yada because encoders are well known and commonly used in artificial intelligence systems, especially generative artificial intelligence systems [0039].
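As a hedged sketch of the encoder-and-mixer arrangement the rejection attributes to Yada (an encoder producing a latent variable vector, and a mixer combining two latent vectors into a mixed latent variable vector), the following hypothetical PyTorch modules illustrate the idea; the dimensions and architectures are assumptions, not Yada's disclosure.

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Encodes a flattened image into a latent variable vector (placeholder)."""
    def __init__(self, in_dim: int = 64 * 64 * 3, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(1))

class Mixer(nn.Module):
    """Mixes two latent variable vectors into one (cf. Yada's 'mixer')."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=1))

encoder = FeatureEncoder()
mixer = Mixer()
img = torch.rand(1, 3, 64, 64)
latent = encoder(img)                      # first latent variable vector
mixed = mixer(latent, torch.rand(1, 128))  # mixed latent variable vector
```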
Choi as modified by Yada does not explicitly teach
merging the initial character style profile image into the template image to obtain a target character style profile image (emphasis added).
However, Zhang teaches
merging the initial character style profile image into the template image [e.g. fusing the target garment deformation image, the second human body template image, the garment deformation template image and the first image, 0139] to obtain a target character style profile image [e.g. to obtain the second image including the human body wearing the target garment, 0139, also see 0053].
Therefore, it would have been obvious to one of ordinary skill in the art to have modified the combination of Choi’s method and the teachings of Yada with the features of
merging the initial character style profile image into the template image to obtain a target character style profile image
in the same conventional manner as taught by Zhang because Zhang provides a fusion network that can generate a second image with higher fidelity [0016, 0024].
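The fusion Zhang describes is performed by a trained network; as a rough illustration of mask-based merging of a generated image into a template, consider the following hand-written placeholder (not Zhang's fusion network).

```python
import numpy as np

def merge_images(style_image: np.ndarray, template: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Blend the generated image into the template wherever mask == 1.

    Hypothetical stand-in for a learned fusion network: a hard mask
    composite rather than Zhang's trained model.
    """
    m = mask[..., None].astype(float)  # broadcast the mask over color channels
    return m * style_image + (1.0 - m) * template

style = np.random.rand(64, 64, 3)
tmpl = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1  # region of the template to replace
result = merge_images(style, tmpl, mask)
```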
In regards to claim 9, the claim recites similar limitations as claim 1, but in the form of an electronic device comprising: at least one processing apparatus; a storage apparatus configured to store at least one program; when the at least one program is executed by the at least one processing apparatus, the at least one processing apparatus implements the method of claim 1. Furthermore, Choi teaches an electronic device [Fig. 2A; e.g. user device, 0066] comprising: at least one processing apparatus [Fig. 2A; e.g. processor, 0066]; a storage apparatus [Fig. 2A; e.g. memory, 0066] configured to store at least one program [e.g. program, 0196]; when the at least one program is executed by the at least one processing apparatus [0196], the at least one processing apparatus implements the method of claim 1. Therefore, the same rationale as claim 1 is applied.
In regards to claim 10, the claim recites similar limitations as claim 1, but in the form of a non-transitory computer-readable medium having a computer program stored thereon, when executed by a processing apparatus, the computer program implements the method of claim 1. Furthermore, Choi teaches a non-transitory computer-readable medium [Fig. 2A; e.g. memory, 0066] having a computer program [e.g. program, 0196] stored thereon, when executed by a processing apparatus [Fig. 2A; e.g. processor, 0066], the computer program implements the method of claim 1. Therefore, the same rationale as claim 1 is applied.
Claim(s) 2, 11, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (U.S. Patent Application 20210358190) in view of Yada et al. (U.S. Patent Application 20200356591) and further in view of Zhang et al. (U.S. Patent Application 20220084270) as applied to claims 1, 9, 10 above, and further in view of Beaver et al. (U.S. Patent Application 20150006313).
In regards to claim 2, Choi as modified by Yada and Zhang does not explicitly teach the method of claim 1, wherein the merging the initial character style profile image into the template image to obtain a target character style profile image comprises:
translating a position of a character style profile in the initial character style profile image; and
merging the initial character style profile image after translation into the template image to obtain the target character style profile image.
However, Beaver teaches the method of claim 1, wherein the merging the initial character style profile image into the template image to obtain a target character style profile image [see rejection of claim 1 above] comprises:
translating a position of a character style profile in the initial character style profile image [e.g. the newly selected attribute may be re-located to a different position in the product image, 0112, also see 0107]; and
merging the initial character style profile image after translation into the template image to obtain the target character style profile image [e.g. The merging step was already taught by Zhang so the translated initial character style profile image of Beaver is merged into the template image using Zhang’s merging method, see 0053, 0139 of Zhang and 0107, 0112 of Beaver].
Therefore, it would have been obvious to one of ordinary skill in the art to have modified the combination of Choi’s method and the teachings of Yada and Zhang with the features of
translating a position of a character style profile in the initial character style profile image; and
merging the initial character style profile image after translation into the template image to obtain the target character style profile image
in the same conventional manner as taught by Beaver because translating features of an image is well known and commonly used in the art of image processing systems.
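Translating a feature to a different position in an image, in the conventional sense relied on above, can be illustrated with the following sketch; the zero-padding behavior and function name are assumptions for illustration.

```python
import numpy as np

def translate_image(image: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Shift image content by (dy, dx) pixels; vacated pixels are zero-padded."""
    out = np.zeros_like(image)
    h, w = image.shape[:2]
    ys = slice(max(dy, 0), min(h, h + dy))   # destination rows
    xs = slice(max(dx, 0), min(w, w + dx))   # destination columns
    yd = slice(max(-dy, 0), min(h, h - dy))  # source rows
    xd = slice(max(-dx, 0), min(w, w - dx))  # source columns
    out[ys, xs] = image[yd, xd]
    return out

img = np.random.rand(64, 64, 3)
shifted = translate_image(img, dy=5, dx=-3)  # 5 px down, 3 px left
```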
In regards to claim 11, the claim recites similar limitations as claim 2. Therefore, the same rationale as claim 2 is applied.
In regards to claim 17, the claim recites similar limitations as claim 2. Therefore, the same rationale as claim 2 is applied.
Allowable Subject Matter
Claims 3, 4, 12, 13, 18, 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regards to claim 3, the prior art of record fails to teach or suggest the method of claim 2, wherein the translating a position of a character style profile in the initial character style profile image comprises:
obtaining a vertical standard line and a horizontal standard line of the initial character style profile image;
extracting a central key point and a mouth corner key point of the character style profile in the initial character style profile image;
determining a distance difference between a vertical coordinate of the central key point and the vertical standard line, and determining the distance difference between the vertical coordinate of the central key point and the vertical standard line as a first distance difference;
determining a distance difference between a horizontal coordinate of the mouth corner key point and the horizontal standard line, and determining the distance difference between the horizontal coordinate of the mouth corner key point and the horizontal standard line as the second distance difference; and
translating the character style profile along a vertical direction according to the first distance difference, and translating the character style profile along a horizontal direction according to the second distance difference.
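Read literally, claim 3 computes two distance differences and translates the profile accordingly. The following sketch illustrates that reading; the coordinate conventions (which axis each standard line constrains) are ambiguous in the claim, so the assignments below are assumptions, and np.roll stands in for a real padded translation.

```python
import numpy as np

def compute_alignment_offsets(center_kp, mouth_kp, v_line, h_line):
    """Compute the claim-3 distance differences (coordinate conventions assumed).

    center_kp, mouth_kp: (x, y) key points of the character style profile.
    v_line: position of the vertical standard line.
    h_line: position of the horizontal standard line.
    """
    first_diff = center_kp[1] - v_line   # vertical coordinate vs. vertical standard line
    second_diff = mouth_kp[0] - h_line   # horizontal coordinate vs. horizontal standard line
    return first_diff, second_diff

def translate_profile(image, first_diff, second_diff):
    """Shift vertically by -first_diff and horizontally by -second_diff.

    np.roll wraps content around the border; a real implementation would pad instead.
    """
    return np.roll(image, shift=(-first_diff, -second_diff), axis=(0, 1))

img = np.random.rand(64, 64, 3)
dy, dx = compute_alignment_offsets(center_kp=(32, 40), mouth_kp=(30, 50),
                                   v_line=32, h_line=36)
aligned = translate_profile(img, dy, dx)
```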
In regards to claim 4, the prior art of record fails to teach or suggest the method of claim 1, wherein the merging the initial character style profile image into the template image to obtain a target character style profile image comprises:
recognizing a template character profile in the template image to obtain a recognition rectangle box;
cropping the initial character style profile image into an image of a set size according to the recognition rectangle box;
pasting the image of the set size into the recognition rectangle box;
obtaining a character profile mask image of the template image; and
merging the image of the set size pasted into the recognition rectangle box into the template image based on the character profile mask image, to obtain the target character style profile image.
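The claim-4 sequence (recognize a rectangle box, crop to a set size, paste, then mask-based merge) might be illustrated as follows; the box format, the sizes, and the use of a simple crop in place of real resizing are all assumptions.

```python
import numpy as np

def merge_via_rectangle(style_image, template, box, mask):
    """Hypothetical illustration of the claim-4 steps.

    box: (top, left, height, width) recognition rectangle of the template profile.
    mask: character profile mask of the template (1 inside the profile region).
    """
    top, left, h, w = box
    # crop the initial style image to the set size of the rectangle
    cropped = style_image[:h, :w]            # placeholder for real resizing
    # paste the cropped image into the recognition rectangle box
    pasted = template.copy()
    pasted[top:top + h, left:left + w] = cropped
    # merge based on the profile mask so only the profile region is replaced
    m = mask[..., None].astype(float)
    return m * pasted + (1.0 - m) * template

style = np.random.rand(64, 64, 3)
tmpl = np.random.rand(128, 128, 3)
mask = np.zeros((128, 128))
mask[32:96, 32:96] = 1
result = merge_via_rectangle(style, tmpl, box=(32, 32, 64, 64), mask=mask)
```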
In regards to claims 12, 13, the claims recite similar limitations as claims 3, 4, respectively. Therefore, the claims 12, 13 are allowable for at least the same reasons as claims 3, 4, respectively, if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regards to claims 18, 19, the claims recite similar limitations as claims 3, 4, respectively. Therefore, the claims 18, 19 are allowable for at least the same reasons as claims 3, 4, respectively, if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 5-7, 14-16, 20, 21 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
In regards to claim 5, the prior art of record fails to teach or suggest the method of claim 1, wherein a way for training the first feature encoder comprises:
obtaining a character profile sample image;
inputting the character profile sample image into a first feature encoder to be trained to obtain a first sample character profile feature code;
inputting a first sample character profile feature code into a character profile generative model to obtain a first reconstructed character profile image; and
training the first feature encoder to be trained based on a loss function between the first reconstructed character profile image and the character profile sample image to obtain the first feature encoder.
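Claim 5 describes training the first feature encoder against a reconstruction loss. A minimal sketch of one such training step, assuming placeholder architectures and an MSE loss (the claim does not specify the loss function), with the generative model held fixed:

```python
import torch
import torch.nn as nn

# Hypothetical encoder/generator pair; architectures are placeholders.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128))
generator = nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Unflatten(1, (3, 64, 64)))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)  # only the encoder is updated
loss_fn = nn.MSELoss()  # placeholder for the claimed loss function

sample = torch.rand(8, 3, 64, 64)      # character profile sample images
code = encoder(sample)                 # first sample character profile feature code
reconstructed = generator(code)        # first reconstructed character profile image
loss = loss_fn(reconstructed, sample)  # loss between reconstruction and sample
optimizer.zero_grad()
loss.backward()
optimizer.step()
```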
In regards to claim 6, the prior art of record fails to teach or suggest the method of claim 1, wherein a way for training the second feature encoder comprises:
obtaining a character profile sample image;
inputting the character profile sample image into the first feature encoder to obtain a second sample character profile feature code;
inputting the second sample character profile feature code into a character profile generative model to obtain a second reconstructed character profile image;
inputting the second sample character profile feature code and a real attribute increment into a second feature encoder to be trained to obtain a third sample character profile feature code;
inputting the third sample character profile feature code into the character profile generative model to obtain an edited character profile image;
determining a predictive attribute increment between the second reconstructed character profile image and the edited character profile image; and
training the second feature encoder to be trained based on a loss function between the predictive attribute increment and the real attribute increment to obtain the second feature encoder.
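Claim 6 trains the second feature encoder on a loss between a predictive attribute increment and a real attribute increment. A sketch under stated assumptions: the attribute predictor, the dimensions, and the MSE loss are placeholders, and only the second encoder's parameters are updated.

```python
import torch
import torch.nn as nn

latent = 128
first_encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, latent))  # pre-trained
generator = nn.Sequential(nn.Linear(latent, 64 * 64 * 3))                    # pre-trained
second_encoder = nn.Linear(latent + 1, latent)                               # to be trained

# A hypothetical attribute predictor stands in for "determining a predictive
# attribute increment between" the two generated images.
attribute_predictor = nn.Sequential(nn.Linear(2 * 64 * 64 * 3, 1))

optimizer = torch.optim.Adam(second_encoder.parameters(), lr=1e-4)
sample = torch.rand(8, 3, 64, 64)
real_increment = torch.rand(8, 1)                  # real attribute increment

code2 = first_encoder(sample)                      # second sample feature code
recon2 = generator(code2)                          # second reconstructed image
code3 = second_encoder(torch.cat([code2, real_increment], dim=1))  # third feature code
edited = generator(code3)                          # edited character profile image
pred_increment = attribute_predictor(torch.cat([recon2, edited], dim=1))
loss = nn.functional.mse_loss(pred_increment, real_increment)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```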
In regards to claim 7, the prior art of record fails to teach or suggest the method of claim 1, wherein a way for training the style profile generative model comprises:
cross-iteratively training a character profile generative model and a character profile discriminative model until an accuracy of a discrimination result output by the character profile discriminative model meets a set condition, and determining the trained character profile generative model as the style profile generative model;
a process of cross-iteratively training comprises:
obtaining a set style character profile sample image;
inputting first random noise data into the character profile generative model to obtain a first style character profile image;
inputting the first style character profile image and the set style character profile sample image into the character profile discriminative model to obtain a first discrimination result;
adjusting parameters in the character profile generative model based on the first discrimination result;
inputting second random noise data into the adjusted character profile generative model to obtain a second style character profile image;
inputting the second style character profile image and the set style character profile sample image into the character profile discriminative model to obtain a second discrimination result, and determining a real discrimination result between the second style character profile image and the set style character profile sample image; and
adjusting parameters in the character profile discriminative model according to a loss function between the second discrimination result and the real discrimination result.
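Claim 7 describes cross-iterative (adversarial) training of a generator and discriminator. A minimal GAN training sketch, with all architectures and hyperparameters assumed:

```python
import torch
import torch.nn as nn

noise_dim, img_dim = 64, 32 * 32
generator = nn.Sequential(nn.Linear(noise_dim, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(8, img_dim)           # set style character profile sample images
for _ in range(100):                    # cross-iterative training loop
    # adjust generator parameters based on the (first) discrimination result
    fake1 = generator(torch.randn(8, noise_dim))          # from first random noise data
    g_loss = bce(discriminator(fake1), torch.ones(8, 1))  # generator seeks a "real" verdict
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # adjust discriminator parameters against the real discrimination result
    fake2 = generator(torch.randn(8, noise_dim)).detach()  # from second random noise data
    d_loss = (bce(discriminator(real), torch.ones(8, 1)) +
              bce(discriminator(fake2), torch.zeros(8, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
```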
In regards to claims 14-16, the claims recite similar limitations as claims 5-7, respectively. Therefore, the claims 14-16 are allowable for at least the same reasons as claims 5-7, respectively, if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
In regards to claims 20, 21, the claims recite similar limitations as claims 5, 6, respectively. Therefore, the claims 20, 21 are allowable for at least the same reasons as claims 5, 6, respectively, if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SHIN whose telephone number is (571)270-5764. The examiner can normally be reached Monday - Friday from 11:00 AM to 7:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW SHIN/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612