DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 13 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The scope of the claimed “computer-readable storage medium” is not clear based on the definitions provided in the Applicant’s Specification. Paragraphs 00163 and 00171 define “computer-readable signal medium” and “computer-readable storage medium” using open-ended terms (“for example,” “not limited to,” “may be a”). The Specification does attempt to distinguish the two types of computer-readable medium by stating “The computer-readable signal medium may also be any other computer-readable medium than the computer-readable storage medium.” However, by using terms such as “may also be,” the Specification still fails to clearly exclude an interpretation of the computer-readable storage medium as a transitory/signal medium. The Examiner believes the claim should recite “A non-transitory computer-readable medium.”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5, 7, 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (PGPUB Document No. US ) in view of Cao et al. (PGPUB Document No. US ).
Regarding claim 1, Liu teaches an image style transfer model training method, comprising:
acquiring a first number of image samples, training a preset neural network model by using the image samples (The initial parameters of the first convolutional neural network 10 may also adopt trained parameters in an image database such as ImageNet or the like. (Liu: 0088)) to determine an image generation model and model parameters (the first convolutional neural network 10 includes a plurality of first convolutional kernels and a plurality of biases, the plurality of first convolutional kernels are first convolutional kernels and biases that are included in all the first convolutional layers of the first convolutional neural network 10, and the parameters of the neural network may include the plurality of first convolutional kernels and the plurality of biases (Liu: 0103, 0060));
acquiring a second number of style image samples, training the image generation model (“the second training input image is a style image” (Liu: 0141)) by using the style image samples to determine style model parameters (a second convolutional layer 202 includes a second set of convolutional kernels and a second set of biases (Liu: 0060));
determining transfer model parameters based on the portrait model parameters and the style model parameters (“step S50 may include adjusting a ratio between the plurality of first convolutional kernels and the plurality of biases according to the weight-bias-ratio loss value. In the process of modifying the parameters of the first convolutional neural network 10” (Liu: 0117))
(“the content loss function is used to calculate, based on the first training input feature of the first training input image and the first training output feature of the training output image, a content loss value of the parameters of the first convolutional neural network 10. The style loss function is used to calculate, based on the second training output feature of the training output image and the second training input feature of the second training input image, a style loss value of the parameters of the first convolutional neural network 10” (Liu: 0155));
and generating a first image style transfer model based on the transfer model parameters and the preset neural network model (the resulting image as shown in FIG. 9C having content features of the first image and style image (Liu: 0190)).
However, Liu does not expressly teach, but Cao teaches, the images being portrait images (see the images shown in FIGS. 4-7 of Cao), and the second number being less than the first number (high-quality target style conversion model can be generated based on unsupervised learning by using only a limited quantity of low-quality image samples. (Cao: 0036)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Liu to implement the teachings of Cao, because “this resolves a current problem of difficulty in obtaining a large quantity of high-quality specified-style training pictures for animation generation tasks” (Cao: 0036).
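For illustration of the claim mapping above only (and not as a characterization of either reference's actual implementation), the two-stage flow — training on a first number of image samples, fine-tuning on a smaller second number of style image samples, and determining transfer model parameters from both parameter sets — can be sketched in toy form. All function names, values, and the blending rule below are hypothetical and are not drawn from Liu or Cao.

```python
# Toy numeric sketch of the claimed two-stage training flow, using plain
# Python floats in place of real convolutional kernels and biases.
# Everything here is a hypothetical illustration of the claim mapping.

def train_base_model(image_samples):
    """Stage 1: derive 'image generation model' parameters from the
    larger (first-number) sample set."""
    mean = sum(image_samples) / len(image_samples)
    return {"kernel": mean, "bias": 0.25 * mean}

def train_style_params(base_params, style_samples):
    """Stage 2: fine-tune on the smaller (second-number) style sample
    set to obtain style model parameters."""
    style_mean = sum(style_samples) / len(style_samples)
    return {"kernel": style_mean, "bias": base_params["bias"]}

def blend(base_params, style_params, ratio=0.5):
    """Determine transfer model parameters from both parameter sets
    (a stand-in for a ratio adjustment such as Liu's weight-bias-ratio
    step; the interpolation rule here is hypothetical)."""
    return {k: (1 - ratio) * base_params[k] + ratio * style_params[k]
            for k in base_params}

image_samples = [1.0, 2.0, 3.0, 4.0]   # "first number" of image samples
style_samples = [10.0, 12.0]           # smaller "second number" of style samples
base = train_base_model(image_samples)
style = train_style_params(base, style_samples)
transfer = blend(base, style)          # transfer model parameters
```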
Regarding claim 5, the combined teachings above teach the image style transfer model training method according to claim 1, and Cao further teaches:
wherein the second number of style image samples comprises a plurality of groups of the style image samples, one group of the style image samples corresponds to one image style (there are 10 different basic styles. In this case, the full style feature includes 10 different image style features (Cao: 0156));
the training the portrait image generation model by using the style image samples to determine style model parameters comprises:
training the portrait image generation model by using the plurality of groups of the style image samples, respectively, and determining style model parameters respectively corresponding to a plurality of image styles corresponding to the plurality of groups of the style image samples (5000 encoding vectors may be obtained. Then, encoding vectors having the same style may be averaged and summed to obtain 20 style features, to be combined into a full style feature (Cao: 015));
and the determining transfer model parameters based on the portrait model parameters and the style model parameters comprises:
determining the transfer model parameters based on the portrait model parameters and the style model parameters respectively corresponding to the plurality of image styles (performing parameter fusion processing on the parameter of the function layer and the parameter of the adjustment reference layer according to the parameter fusion intensity of the function layer and the parameter fusion intensity of the adjustment reference layer to obtain the adjusted target style conversion model (Cao: 0152)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the teachings of Cao, because this enables added diversity, stability, and efficiency for image style conversion.
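The per-style grouping mapped above — encoding vectors of the same style averaged into one style feature per group, followed by parameter fusion per style — can likewise be sketched in toy form. The encoder, fusion rule, and intensity value below are hypothetical illustrations, not Cao's actual implementation.

```python
# Toy sketch of deriving per-style parameters from groups of style
# samples: each group's encoding vectors are averaged into one style
# feature, and portrait parameters are fused toward that feature.
# All names and numbers are hypothetical.

def encode(sample):
    """Stand-in encoder producing a 2-dimensional 'encoding vector'."""
    return [sample, sample * sample]

def style_feature(group):
    """Average the encoding vectors of one group of style samples."""
    vectors = [encode(s) for s in group]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def fuse(portrait_params, style_feat, intensity=0.3):
    """Hypothetical parameter fusion: interpolate portrait parameters
    toward the style feature by a fusion intensity."""
    return [(1 - intensity) * p + intensity * s
            for p, s in zip(portrait_params, style_feat)]

groups = {"sketch": [1.0, 3.0], "oil": [2.0, 4.0]}  # two style groups
portrait_params = [0.0, 0.0]
per_style = {name: fuse(portrait_params, style_feature(g))
             for name, g in groups.items()}         # one parameter set per style
```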
Regarding claim 7, the combined teachings above teach an image style transfer method comprising:
acquiring an image to be processed (step S100 acquiring a first image (Liu: 0183));
and inputting the image to be processed into a first image style transfer model to generate a target stylized image of the image to be processed (the to-be-converted image 107 is inputted into the target style conversion model 106, and style conversion processing is performed on the to-be-converted image 107 to obtain a target image 109 conforming to the target style (Cao: 0039));
wherein the first image style transfer model is obtained based on the image style transfer model training method according to claim 1 (refer to the rejection of claim 1 above).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to apply the training teachings of Cao to the combined teachings above, because this enables improved training efficiency.
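The inference step of claim 7 — inputting an image to be processed into the trained transfer model to obtain a stylized output — reduces to a single forward pass. The sketch below uses a hypothetical per-pixel transform as a stand-in for the actual network; the parameter values are illustrative only.

```python
# Toy sketch of applying a trained style transfer model: the model is
# a closure over transfer model parameters, and styling is a single
# forward pass over the input image. Not Liu's or Cao's actual network.

def make_transfer_model(kernel, bias):
    """Return a callable stand-in for the first image style transfer
    model, parameterized by hypothetical transfer model parameters."""
    def model(image):
        # Apply the toy per-pixel transform to every pixel.
        return [[kernel * px + bias for px in row] for row in image]
    return model

image = [[0.0, 0.5], [1.0, 0.25]]          # tiny "image to be processed"
model = make_transfer_model(kernel=2.0, bias=0.1)
stylized = model(image)                     # "target stylized image"
```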
Regarding claim 12, claim 12 is similar in scope to claims 1 and 7. Therefore, the rejections of claims 1 and 7 similarly apply to claim 12. Further, the combined teachings above teach an electronic device comprising a processor and a memory, as presently claimed (Liu: 0255-0256).
Regarding claim 13, claim 13 is similar in scope to claims 1 and 7. Therefore, the rejections of claims 1 and 7 similarly apply to claim 13. Further, the combined teachings above teach a computer-readable storage medium, as presently claimed (Liu: 0255, 0258).
Allowable Subject Matter
Claims 2-6, 8 and 15-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu whose telephone number is (571) 272-8079. The examiner can normally be reached M-F: 9:30am-1:30pm and 3:30pm-8:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616