DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/26/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-8, 10-12, and 14-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 and 20 of U.S. Patent No. 12045914 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-8, 10-12, and 14-20 of the present application are obvious variants of claims 1-17 and 20 of U.S. Patent No. 12045914 B2, as shown below.
Current application
US 12045914 B2
Claim 1. An image coloring method based on artificial intelligence, performed by an electronic device and comprising: acquiring first color a priori information about a target image (being the image-to-be-colored); transforming the first color a priori information to obtain second color a priori information aligned with the target image; obtaining a first image feature based on the target image; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and obtaining a first colored image based on the second image feature and the second color a priori information, the first colored image being aligned with the target image.
Claim 1. An image coloring method based on artificial intelligence, performed by an electronic device and comprising: acquiring first color a priori information about an image-to-be-colored; transforming the first color a priori information to obtain second color a priori information aligned with the image-to-be-colored; downsampling the image-to-be-colored to obtain a first image feature; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and upsampling the second image feature based on the second color a priori information to obtain a first colored image, the first colored image being aligned with the image-to-be-colored.
2. The method according to claim 1, wherein the acquiring first color a priori information about the target image comprises: acquiring an encoding vector of the target image; performing identity mapping processing on the encoding vector to obtain a second colored image, wherein the second colored image is not aligned with the target image; using multi-scale features as the first color a priori information, wherein the multi-scale features are obtained during the identity mapping processing.
2. The method according to claim 1, wherein the acquiring first color a priori information about the image-to-be-colored comprises: acquiring an encoding vector of the image-to-be-colored; performing identity mapping processing on the encoding vector to obtain a second colored image, wherein the second colored image is not aligned with the image-to-be-colored; using multi-scale features as the first color a priori information, wherein the multi-scale features are obtained during the identity mapping processing.
3. The method according to claim 1, wherein the transforming the first color a priori information to obtain the second color a priori information aligned with the target image comprises: determining a similarity matrix between the target image and the second colored image, wherein the second colored image is obtained by coloring the target image and is not aligned with the target image; performing affine transformation processing on the first color a priori information based on the similarity matrix to obtain multi-scale features aligned with the target image, wherein the first color a priori information comprises the multi-scale features obtained during a process of coloring the target image; and using the multi-scale features aligned with the target image as the second color a priori information.
4. The method according to claim 1, wherein the transforming the first color a priori information to obtain the second color a priori information aligned with the image-to-be-colored comprises: determining a similarity matrix between the image-to-be-colored and the second colored image, wherein the second colored image is obtained by coloring the image-to-be-colored and is not aligned with the image-to-be-colored; performing affine transformation processing on the first color a priori information based on the similarity matrix to obtain multi-scale features aligned with the image-to-be-colored, wherein the first color a priori information comprises the multi-scale features obtained during a process of coloring the image-to-be-colored; and using the multi-scale features aligned with the image-to-be-colored as the second color a priori information.
4. The method according to claim 3, wherein the determining the similarity matrix between the target image and the second colored image comprises: acquiring a first position feature of the target image and a second position feature of the second colored image, wherein the first position feature comprises position features of pixel points in the target image, and the second position feature comprises position features of pixel points in the second colored image; and determining the similarity matrix between the target image and the second colored image based on the first position feature and the second position feature, wherein the similarity matrix comprises similarities between pixel points in the target image and pixel points in the second colored image.
5. The method according to claim 4, wherein the determining the similarity matrix between the image-to-be-colored and the second colored image comprises: acquiring a first position feature of the image-to-be-colored and a second position feature of the second colored image, wherein the first position feature comprises position features of pixel points in the image-to-be-colored, and the second position feature comprises position features of pixel points in the second colored image; and determining the similarity matrix between the image-to-be-colored and the second colored image based on the first position feature and the second position feature, wherein the similarity matrix comprises similarities between pixel points in the image-to-be-colored and pixel points in the second colored image.
5. The method according to claim 4, wherein the determining the similarity matrix between the target image and the second colored image based on the first position feature and the second position feature comprises: performing non-local processing on the first position feature and the second position feature to obtain a similarity matrix corresponding to the non-local processing; and normalizing the similarity matrix corresponding to the non-local processing to obtain the similarity matrix between the target image and the second colored image.
6. The method according to claim 5, wherein the determining the similarity matrix between the image-to-be-colored and the second colored image based on the first position feature and the second position feature comprises: performing non-local processing on the first position feature and the second position feature to obtain a similarity matrix corresponding to the non-local processing; and normalizing the similarity matrix corresponding to the non-local processing to obtain the similarity matrix between the image-to-be-colored and the second colored image.
12. The method according to claim 3, further comprising: performing conversion processing on the encoding vector to obtain a conversion vector; determining third color a priori information aligned with the target image based on the transformation vector; and performing modulation coloring processing on the target image based on the third color a priori information to obtain a third colored image aligned with the target image; wherein the third colored image comprises at least one of: an image obtained by coloring a background in the target image, an image obtained by coloring a foreground in the target image, or an image obtained by adjusting a saturation of the target image.
3. The method according to claim 2, further comprising: performing conversion processing on the encoding vector to obtain a conversion vector; determining third color a priori information aligned with the image-to-be-colored based on the transformation vector; and performing modulation coloring processing on the image-to-be-colored based on the third color a priori information to obtain a third colored image aligned with the image-to-be-colored; wherein the third colored image comprises at least one of: an image obtained by coloring a background in the image-to-be-colored, an image obtained by coloring a foreground in the image-to-be-colored, or an image obtained by adjusting a saturation of the image-to-be-colored.
6. The method according to claim 1, wherein the performing modulation coloring processing on the first image feature based on the second color a priori information to obtain the second image feature comprises: determining first modulation parameters based on the multi-scale features, aligned with the target image, in the second color a priori information; and performing modulation coloring processing on the first image feature via the first modulation parameters to obtain the second image feature.
7. The method according to claim 1, wherein the performing modulation coloring processing on the first image feature based on the second color a priori information to obtain the second image feature comprises: determining first modulation parameters based on the multi-scale features, aligned with the image-to-be-colored, in the second color a priori information; and performing modulation coloring processing on the first image feature via the first modulation parameters to obtain the second image feature.
7. The method according to claim 6, wherein the modulation coloring processing is achieved through a coloring network, and the coloring network comprises a residual module; and the determining the first modulation parameters based on the multi-scale features, aligned with the target image, in the second color a priori information comprises: determining a first scale feature, corresponding to the residual module in the coloring network, in the multi-scale features aligned with the target image; and performing convolution processing on the first scale feature to obtain the first modulation parameters corresponding to the residual module.
8. The method according to claim 7, wherein the modulation coloring processing is achieved through a coloring network, and the coloring network comprises a residual module; and the determining the first modulation parameters based on the multi-scale features, aligned with the image-to-be-colored, in the second color a priori information comprises: determining a first scale feature, corresponding to the residual module in the coloring network, in the multi-scale features aligned with the image-to-be-colored; and performing convolution processing on the first scale feature to obtain the first modulation parameters corresponding to the residual module.
8. The method according to claim 6, wherein the performing modulation coloring processing on the first image feature via the first modulation parameters to obtain the second image feature comprises: performing convolution processing on the first image feature to obtain a convolution result; performing first linear transformation processing on the convolution result via the first modulation parameters to obtain a first linear transformation result; and adding the first linear transformation result and the first image feature, and using an obtained addition result as the second image feature.
9. The method according to claim 7, wherein the performing modulation coloring processing on the first image feature via the first modulation parameters to obtain the second image feature comprises: performing convolution processing on the first image feature to obtain a convolution result; performing first linear transformation processing on the convolution result via the first modulation parameters to obtain a first linear transformation result; and adding the first linear transformation result and the first image feature, and using an obtained addition result as the second image feature.
10. The method according to claim 9, wherein the upsampling the second image feature based on the second color a priori information to obtain a first colored image comprises: determining second modulation parameters based on the multi-scale features, aligned with the target image, in the second color a priori information; performing deconvolution processing on the second image feature to obtain a deconvolution result; performing second linear transformation processing on the deconvolution processing result via the second modulation parameters to obtain a second linear transformation result; activating the second linear transformation result to obtain a predicted color image aligned with the target image; and performing color mode conversion processing on the predicted color image to obtain the first colored image.
10. The method according to claim 1, wherein the upsampling the second image feature based on the second color a priori information to obtain a first colored image comprises: determining second modulation parameters based on the multi-scale features, aligned with the image-to-be-colored, in the second color a priori information; performing deconvolution processing on the second image feature to obtain a deconvolution result; performing second linear transformation processing on the deconvolution processing result via the second modulation parameters to obtain a second linear transformation result; activating the second linear transformation result to obtain a predicted color image aligned with the image-to-be-colored; and performing color mode conversion processing on the predicted color image to obtain the first colored image.
11. The method according to claim 10, wherein modulation coloring processing is achieved through the coloring network, and the coloring network comprises an upsampling module; and the determining the second modulation parameters based on the multi-scale features aligned with the target image in the second color a priori information comprises: determining a second scale feature, corresponding to the upsampling module in the coloring network, from the multi-scale features aligned with the target image; and performing convolution processing on the second scale feature to obtain second modulation parameters corresponding to the upsampling module.
11. The method according to claim 10, wherein modulation coloring processing is achieved through the coloring network, and the coloring network comprises an upsampling module; and the determining the second modulation parameters based on the multi-scale features aligned with the image-to-be-colored in the second color a priori information comprises: determining a second scale feature, corresponding to the upsampling module in the coloring network, from the multi-scale features aligned with the image-to-be-colored; and performing convolution processing on the second scale feature to obtain second modulation parameters corresponding to the upsampling module.
14. The method according to claim 1, wherein a coloring network is used to obtain the first image feature, perform the modulation coloring processing, and obtain the first colored image; and the coloring network is trained by: determining a total loss function based on an adversarial loss function, a perception loss function, a domain alignment loss function and a context loss function corresponding to the coloring network; calling the coloring network to perform coloring processing on an image sample-to-be-colored to obtain a first colored image sample, a second colored image sample and a predicted color image sample, wherein the first colored image sample is obtained by converting the predicted color image sample and is aligned with the image sample-to-be-colored, and the second colored image sample is not aligned with the image sample-to-be-colored; determining an adversarial loss value based on an error between the predicted color image sample and a first actual colorful image corresponding to the predicted color image sample, determining a perception loss value based on an error between the second colored image sample and a second actual colorful image corresponding to the second colored image sample, determining a domain alignment loss value based on an error between the image sample-to-be-colored and the second colored image sample, and determining a context loss value based on an error between the first colored image sample and the second colored image sample, wherein the second actual colorful image is obtained by converting the first actual colorful image; performing a weighted summation on the adversarial loss value, the perception loss value, the domain alignment loss value and the context loss value to obtain a total loss value; and backward propagating the total loss value in the coloring network based on the total loss function, and updating parameters of the coloring network.
12. The method according to claim 1, wherein the downsampling, the modulation coloring processing and the upsampling processing are achieved through the coloring network; and the coloring network is trained by: determining a total loss function based on an adversarial loss function, a perception loss function, a domain alignment loss function and a context loss function corresponding to the coloring network; calling the coloring network to perform coloring processing on an image sample-to-be-colored to obtain a first colored image sample, a second colored image sample and a predicted color image sample, wherein the first colored image sample is obtained by converting the predicted color image sample and is aligned with the image sample-to-be-colored, and the second colored image sample is not aligned with the image sample-to-be-colored; determining an adversarial loss value based on an error between the predicted color image sample and a first actual colorful image corresponding to the predicted color image sample, determining a perception loss value based on an error between the second colored image sample and a second actual colorful image corresponding to the second colored image sample, determining a domain alignment loss value based on an error between the image sample-to-be-colored and the second colored image sample, and determining a context loss value based on an error between the first colored image sample and the second colored image sample, wherein the second actual colorful image is obtained by converting the first actual colorful image; performing a weighted summation on the adversarial loss value, the perception loss value, the domain alignment loss value and the context loss value to obtain a total loss value; and backward propagating the total loss value in the coloring network based on the total loss function, and updating parameters of the coloring network.
15. An image coloring apparatus based on artificial intelligence, comprising: a memory, configured to store executable instructions; and a processor, when executing the executable instructions stored in the memory, configured to perform: acquiring first color a priori information about a target image (being the image-to-be-colored); transforming the first color a priori information to obtain second color a priori information aligned with the target image; obtaining a first image feature based on the target image; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and obtaining a first colored image based on the second image feature and the second color a priori information, the first colored image being aligned with the target image.
13. An image coloring apparatus based on artificial intelligence, comprising: a memory, configured to store executable instructions; and a processor, when executing the executable instructions stored in the memory, configured to perform: acquiring first color a priori information about an image-to-be-colored; transforming the first color a priori information to obtain second color a priori information aligned with the image-to-be-colored; downsampling the image-to-be-colored to obtain a first image feature; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and upsampling the second image feature based on the second color a priori information to obtain a first colored image, the first colored image being aligned with the image-to-be-colored.
16. The apparatus according to claim 15, wherein the acquiring first color a priori information about the target image comprises: acquiring an encoding vector of the target image; performing identity mapping processing on the encoding vector to obtain a second colored image, wherein the second colored image is not aligned with the target image; using multi-scale features as the first color a priori information, wherein the multi-scale features are obtained during the identity mapping processing.
14. The apparatus according to claim 13, wherein the acquiring first color a priori information about the image-to-be-colored comprises: acquiring an encoding vector of the image-to-be-colored; performing identity mapping processing on the encoding vector to obtain a second colored image, wherein the second colored image is not aligned with the image-to-be-colored; using multi-scale features as the first color a priori information, wherein the multi-scale features are obtained during the identity mapping processing.
17. The apparatus according to claim 15, wherein the transforming the first color a priori information to obtain the second color a priori information aligned with the target image comprises: determining a similarity matrix between the target image and the second colored image, wherein the second colored image is obtained by coloring the target image and is not aligned with the target image; performing affine transformation processing on the first color a priori information based on the similarity matrix to obtain multi-scale features aligned with the target image, wherein the first color a priori information comprises the multi-scale features obtained during a process of coloring the target image; and using the multi-scale features aligned with the target image as the second color a priori information.
15. The apparatus according to claim 13, wherein the transforming the first color a priori information to obtain the second color a priori information aligned with the image-to-be-colored comprises: determining a similarity matrix between the image-to-be-colored and the second colored image, wherein the second colored image is obtained by coloring the image-to-be-colored and is not aligned with the image-to-be-colored; performing affine transformation processing on the first color a priori information based on the similarity matrix to obtain multi-scale features aligned with the image-to-be-colored, wherein the first color a priori information comprises the multi-scale features obtained during a process of coloring the image-to-be-colored; and using the multi-scale features aligned with the image-to-be-colored as the second color a priori information.
18. The apparatus according to claim 17, wherein the determining the similarity matrix between the target image and the second colored image comprises: acquiring a first position feature of the target image and a second position feature of the second colored image, wherein the first position feature comprises position features of pixel points in the target image, and the second position feature comprises position features of pixel points in the second colored image; and determining the similarity matrix between the target image and the second colored image based on the first position feature and the second position feature, wherein the similarity matrix comprises similarities between pixel points in the target image and pixel points in the second colored image.
16. The apparatus according to claim 15, wherein the determining the similarity matrix between the image-to-be-colored and the second colored image comprises: acquiring a first position feature of the image-to-be-colored and a second position feature of the second colored image, wherein the first position feature comprises position features of pixel points in the image-to-be-colored, and the second position feature comprises position features of pixel points in the second colored image; and determining the similarity matrix between the image-to-be-colored and the second colored image based on the first position feature and the second position feature, wherein the similarity matrix comprises similarities between pixel points in the image-to-be-colored and pixel points in the second colored image.
19. The apparatus according to claim 18, wherein the determining the similarity matrix between the target image and the second colored image based on the first position feature and the second position feature comprises: performing non-local processing on the first position feature and the second position feature to obtain a similarity matrix corresponding to the non-local processing; and normalizing the similarity matrix corresponding to the non-local processing to obtain the similarity matrix between the target image and the second colored image.
17. The apparatus according to claim 16, wherein the determining the similarity matrix between the image-to-be-colored and the second colored image based on the first position feature and the second position feature comprises: performing non-local processing on the first position feature and the second position feature to obtain a similarity matrix corresponding to the non-local processing; and normalizing the similarity matrix corresponding to the non-local processing to obtain the similarity matrix between the image-to-be-colored and the second colored image.
20. A non-transitory computer-readable storage medium, storing executable instructions, the executable instructions, when executed by a processor, causing the processor to perform: acquiring first color a priori information about a target image; transforming the first color a priori information to obtain second color a priori information aligned with the target image; obtaining a first image feature based on the target image; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and obtaining a first colored image based on the second image feature and the second color a priori information, the first colored image being aligned with the target image.
20. A non-transitory computer-readable storage medium, storing executable instructions, the executable instructions, when executed by a processor, causing the processor to perform: acquiring first color a priori information about an image-to-be-colored; transforming the first color a priori information to obtain second color a priori information aligned with the image-to-be-colored; downsampling the image-to-be-colored to obtain a first image feature; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and upsampling the second image feature based on the second color a priori information to obtain a first colored image, the first colored image being aligned with the image-to-be-colored.
Allowable Subject Matter
Claims 1-20 would be allowable if rewritten or amended to overcome the nonstatutory double patenting rejection set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter: no prior art, alone or in combination, discloses the italicized and bolded features.
Claim 1. An image coloring method based on artificial intelligence, performed by an electronic device and comprising: acquiring first color a priori information about a target image; transforming the first color a priori information to obtain second color a priori information aligned with the target image; obtaining a first image feature based on the target image; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and obtaining a first colored image based on the second image feature and the second color a priori information, the first colored image being aligned with the target image.
Claims 2-14 depend on allowable claim 1 and are therefore allowable for the same reasons as claim 1.
Claim 15. An image coloring apparatus based on artificial intelligence, comprising: a memory, configured to store executable instructions; and a processor, when executing the executable instructions stored in the memory, configured to perform: acquiring first color a priori information about a target image; transforming the first color a priori information to obtain second color a priori information aligned with the target image; obtaining a first image feature based on the target image; performing modulation coloring processing on the first image feature based on the second color a priori information to obtain a second image feature; and obtaining a first colored image based on the second image feature and the second color a priori information, the first colored image being aligned with the target image.
Claims 16-20 depend on allowable claim 15 and are therefore allowable for the same reasons as claim 15.
Relevant prior art:
US 20210133430 A1: The imaginary face generation method involves obtaining a face color image and a face depth image frame by frame (510). Face region detection is performed on the face color image (520) to locate a face region of the face color image. A face region of the face depth image is normalized and color-transferred (530) into a normalized face depth image according to the face region of the face color image. The face color image and the normalized face depth image are superimposed (540) to generate a face mixed image. Face region detection and face landmark alignment are performed (550) on the face mixed images. A first face mixed image and a first virtual face mixed image are superimposed into an imaginary face, where the first face mixed image does not belong to the face mixed images.
US 9478040 B2: A method and apparatus are provided for segmenting an object in an image.
The method includes obtaining a first image including the object; receiving an input signal including information about a predetermined position in the first image; selecting at least one pixel included in the first image, based on the position information; generating a second image by dividing the first image into several areas, using the selected at least one pixel; and segmenting the object in the first image by using the first image and the second image.
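The selection-and-division steps summarized above for US 9478040 B2 resemble seed-based region growing: a user-indicated pixel seeds a flood fill that divides the image into object and background areas. The following is an illustrative sketch of that general technique only, under assumed names and a simple intensity-tolerance criterion; it is not the patented method:

```python
import numpy as np
from collections import deque

def segment_from_seed(img, seed, tol=0.1):
    # Illustrative region growing: divide the image into "object" vs.
    # "background" areas by flooding outward from the selected pixel,
    # accepting neighbors whose intensity is within `tol` of the seed.
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = img[seed]
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(img[y, x] - ref) > tol:
            continue
        mask[y, x] = True
        q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

img = np.zeros((6, 6))
img[2:5, 2:5] = 1.0                      # a bright 3x3 square "object"
mask = segment_from_seed(img, (3, 3))    # seed inside the object
assert mask.sum() == 9                   # only the object pixels are selected
```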
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO whose telephone number is (571)270-3390. The examiner can normally be reached Monday-Friday (8:00AM-5:00PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARTIN MUSHAMBO/ Primary Examiner, Art Unit 2615 03/21/2026