Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed February 2, 2026 have been entered. Claims 1, 2, 4-9, 12, 13, 15-17, and 19-22 remain pending. Claims 3, 10, 11, 14, and 18 have been cancelled. Applicant's amendments to claim 1 have overcome the 35 U.S.C. § 103 rejections previously set forth in the final Office action mailed November 30, 2025. However, new 35 U.S.C. § 103 rejections have been entered, with new references applied as necessitated by amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 12, and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, H.; Zhu, W.; Jin, H.; Ma, Y. Automatic, Illumination-Invariant and Real-Time Green-Screen Keying Using Deeply Guided Linear Models. Symmetry 2021, 13, 1454. https://doi.org/10.3390/sym13081454 (hereinafter "Li") in view of Vlahos (US 2016/0048991 A1).
Regarding claims 1, 12, and 13, Li teaches
a green screen matting method, comprising:
acquiring a first image (Figure 1, page 3 lines 3-5);
inputting the first image into a target parameter prediction model, and acquiring a target parameter map based on the target parameter prediction model (sections 2.2 & 4), wherein the target parameter map comprises transparency adjustment parameters for at least part of pixels in the first image (Figure 2, sections 4.1, 4.3);
determining a target opacity map for a foreground image in the first image, based on the transparency adjustment parameters for the at least part of pixels (section 4.3, eq 10, figure 2); and
calculating the foreground image based on the target opacity map and a color value of the first image (Section 1, Page 1).
Li fails to teach wherein, determining a target opacity map for a foreground image in the first image comprising at least one of:
determining the opacity of the foreground image as a product of a target difference and a global matting smoothness parameter as the transparency adjustment parameter, wherein the target difference is the difference between the center color distance of the pixel and a global matting intensity parameter; or
determining the opacity of the foreground image as a ratio of a first difference and a second difference, the first difference is the difference between the center color distance of the pixel and a background adjustment parameter included in the transparency adjustment parameter, and the second difference is the difference between the foreground adjustment parameter and the background adjustment parameter included in the transparency adjustment parameter.
However Vlahos teaches wherein, determining a target opacity map for a foreground image in the first image comprising at least one of:
determining the opacity of the foreground image as a product of a target difference and a global matting smoothness parameter (paragraphs [0035]-[0040]) as the transparency adjustment parameter (paragraphs [0009], [0036], [0040]), wherein the target difference is the difference between the center color distance of the pixel and a global matting intensity parameter (paragraphs [0009], [0010], [0011], [0015], [0027]-[0036]);
Vlahos describes a determination of matte or transparency levels for a foreground image, which is analogous to determining the opacity of the foreground image. This determination involves calculating a color difference between foreground pixels and pixels in a blurred image. The blurred image is generated using a selected blur window length. Because this length and the blurred image generated from it greatly affect the matting of the image, and because the length is a set parameter independent of image content (i.e., global), they can be considered analogous to a global matting intensity parameter. The difference between the color difference and this global matting intensity is then used to determine a transparency level for each pixel, with an option to scale the transparency by a specified factor, which can be considered analogous to a global matting smoothness parameter and a transparency adjustment parameter.
Vlahos is considered analogous to the claimed invention as it is in the same field of computer graphics and image background/foreground segmentation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Li and Vlahos in order to improve the representation of an image's details.
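For illustration only, the two claimed alternatives for determining the opacity of the foreground image can be sketched as follows. The parameter names (intensity, smoothness, foreground/background adjustments) are hypothetical stand-ins for the claimed global matting intensity/smoothness and transparency adjustment parameters; the specific values are not taken from Li or Vlahos.

```python
import numpy as np

def opacity_linear(center_dist, intensity, smoothness):
    """First alternative: opacity = smoothness * (center_dist - intensity),
    i.e. the product of a target difference and a global matting smoothness
    parameter, clipped to the valid alpha range."""
    return np.clip(smoothness * (center_dist - intensity), 0.0, 1.0)

def opacity_ratio(center_dist, bg_adjust, fg_adjust):
    """Second alternative: opacity = (center_dist - bg) / (fg - bg),
    i.e. the ratio of a first difference and a second difference."""
    return np.clip((center_dist - bg_adjust) / (fg_adjust - bg_adjust), 0.0, 1.0)

d = np.array([0.1, 0.4, 0.9])  # per-pixel center color distances (illustrative)
print(opacity_linear(d, intensity=0.2, smoothness=2.0))  # [0.  0.4 1. ]
print(opacity_ratio(d, bg_adjust=0.2, fg_adjust=0.8))
```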
Regarding claim 2, Li in view of Vlahos teaches the method of claim 1. Li further teaches wherein the transparency adjustment parameters comprise at least one of foreground adjustment parameters or background adjustment parameters (section 4.1). Li describes the use of both foreground and background confidence scores, which can be considered analogous to adjustment parameters as they are further used in the training process.
Claim(s) 4, 15, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos and in further view of Zhu (CN 112330531 A).
Regarding claims 4, 15, and 19, Li in view of Vlahos teaches the method of claim 1 but fails to teach wherein the calculating the foreground image based on the target opacity map and a color value of the first image comprises:
acquiring a fusion opaque coefficient;
calculating a color value of the foreground image, based on the fusion opaque coefficient, the color value of the first image, and a color value of a background image.
However, Zhu teaches wherein the calculating the foreground image based on the target opacity map and a color value of the first image comprises:
acquiring a fusion opaque coefficient (Paragraphs 1-4 of Page 4);
calculating a color value of the foreground image, based on the fusion opaque coefficient, the color value of the first image, and a color value of a background image (Paragraph 4 of Page 4).
Zhu describes the determination of transparency as the fusion of weight coefficients, color information, and foreground and background image information. This can be considered analogous to the fusion opaque coefficient. Zhu is considered analogous to the claimed invention as it is in the same field of invention of green screen matting. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Li in view of Vlahos and Zhu to specify the implementation of a green screen matting method.
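The relationship described above between a fusion opaque coefficient, the color value of the first image, and a background color value can be illustrated with the standard alpha compositing equation; this sketch is offered only as an illustration, and the variable names are hypothetical rather than taken from Zhu.

```python
import numpy as np

def composite(alpha, fg, bg):
    """Compositing equation: I = alpha * F + (1 - alpha) * B."""
    return alpha * fg + (1.0 - alpha) * bg

def recover_foreground(alpha, image, bg, eps=1e-6):
    """Invert the compositing equation to obtain the foreground color:
    F = (I - (1 - alpha) * B) / alpha."""
    return (image - (1.0 - alpha) * bg) / np.maximum(alpha, eps)

alpha = 0.5
fg, bg = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
img = composite(alpha, fg, bg)              # observed pixel color
print(recover_foreground(alpha, img, bg))   # recovers [1. 0. 0.]
```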
Claim(s) 5, 16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos, Zhu, and Phoka, T., Jariyawattanarat, W., & Sudsang, A. (2017, February). Fine tuning for green screen matting. In 2017 9th International Conference on Knowledge and Smart Technology (KST) (pp. 317-322). IEEE. DOI: 10.1109/KST.2017.7886106 (hereinafter "Phoka"), and in further view of Pettigrew (WO 98/11510).
Regarding claims 5, 16, and 20, Li in view of Vlahos and Zhu teaches the method of claim 4, and Phoka further teaches wherein the acquiring a fusion opaque coefficient comprises: determining that the fusion opaque coefficient is 1 when a color value of the G channel in the first image is less than or equal to a target color value (Section 3, eq. 9),
determining the fusion opaque coefficient based on a first color distance and a second color distance when the color value of the G channel in the first image is larger than the target color value (Sections 3 & 4), and the second color distance is a distance from a background color average to the green (Section 2);
Phoka describes a color difference process (analogous to color distance) which takes a maximum of the red and blue color channels instead of an average. Substituting the maximum with an average is an obvious change for one of ordinary skill in the art to make. Phoka further describes determining an alpha value (analogous to the fusion opaque coefficient) based on the distance between the green and red/blue channels, which is analogous to the distance between the background color average and the green.
Li in view of Vlahos, Phoka and Zhu fail to teach wherein the first color distance is a distance from the color value in the first image to a green limit boundary plane and the target color value is half of the sum of color values of R channel and B channel, and the green limit boundary is a plane determined when the color value of G channel is equal to the target color value.
However, Pettigrew teaches wherein the first color distance is a distance from the color value in the first image to a green limit boundary plane (page 3, lines 23-28; page 9, lines 22-25) and the target color value is half of the sum of the color values of the R channel and B channel (page 23, lines 13-15), and the green limit boundary is a plane determined when the color value of the G channel is equal to the target color value (Figure 6A). Pettigrew describes the use of a blue boundary plane. It would have been obvious to one of ordinary skill in the art to substitute the blue boundary plane with a green boundary plane and combine this with the color distance calculation of Phoka. Pettigrew also describes calculation of the blue component as the sum of other color components, which can be considered analogous to the sum of the R and B channels. It is an obvious variation to halve these values.
Pettigrew is considered analogous to the claimed invention as it is in the same field of green screen technology. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Pettigrew with Li in view of Vlahos, Phoka, and Zhu to specify the methodology for calculations used in determining an opaque coefficient.
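Purely for illustration, the claimed threshold test can be sketched as follows: the target color value is half the sum of the R and B channels, and the fusion opaque coefficient is forced to 1 when the G channel does not exceed that target. The distance-based branch is a hypothetical placeholder, not a formulation taken from Phoka or Pettigrew.

```python
def fusion_opaque_coefficient(pixel, dist_to_boundary, dist_bg_to_green):
    """Illustrative sketch: pixel is an (R, G, B) tuple in [0, 1]."""
    r, g, b = pixel
    target = (r + b) / 2.0          # half of the sum of the R and B channels
    if g <= target:                 # pixel is not green-dominant: fully opaque
        return 1.0
    # otherwise combine the two color distances (illustrative ratio form)
    return min(1.0, dist_to_boundary / max(dist_bg_to_green, 1e-6))

print(fusion_opaque_coefficient((0.8, 0.3, 0.2), 0.1, 0.5))  # 1.0
```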
Claim(s) 6, 7, 17, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos and in further view of Phoka, T., Jariyawattanarat, W., & Sudsang, A. (2017, February). Fine tuning for green screen matting. In 2017 9th International Conference on Knowledge and Smart Technology (KST) (pp. 317-322). IEEE. DOI: 10.1109/KST.2017.7886106 (hereinafter "Phoka").
Regarding claims 6, 17, and 21, Li in view of Vlahos teaches the method of claim 1. Li further teaches wherein, before inputting the first image into a target parameter prediction model, and acquiring a target parameter map based on the target parameter prediction model, the method further comprises:
training an initial parameter prediction model based on sample information to obtain the target parameter prediction model (Figures 2 & 5, Section 5);
the sample information includes a plurality of sample images and a first parameter map corresponding to each sample image (Figure 5, Section 5), wherein the sample images include a foreground image, a background image and a random green screen image (Figure 4); the first parameter map comprises at least one of a parameter map determined based on a UV coordinate vector of the foreground image pixels, a UV coordinate vector of the random green screen image pixels, and center color distances of the random green screen image pixels; or, a parameter map determined based on a UV coordinate vector of the background image pixels and a UV coordinate vector of the random green screen image pixels; the random green screen image is obtained by fusing color channels of the foreground image and the background image; the background image is obtained by superimposing random green on a real picture (Figures 5 & 7, Sections 4 & 5, Figure 5 & Figure 7 descriptions)
Li describes training a model using sample images from a sample dataset; this dataset also includes background and foreground images, composed images, and alpha values for these images. This can be considered analogous to the first parameter map, as the alpha and composed images necessitate some sort of mapping from the foreground and/or background images. Li also describes the use of textured images; a UV coordinate system is generally a means for texturing images, in this context representing a foreground. Li uses the 2D texturing of a background image, and it would be an obvious variation to use the foreground image as well as the background image. Li also uses different green colors to generate this textured background, which is analogous to the UV coordinate vector of the random green screen image pixels and the superimposing of random green on a real picture. Li further describes fusion of the background and foreground images; this, in conjunction with the texture generation of the background image, is analogous to obtaining the background image by superimposing random green on a real picture.
Li fails to teach a UV coordinate vector of the background image pixels and center color distances of the random green screen pixels; the random green screen image is obtained by fusing color channels of the foreground image and the background image based on the opacity of the foreground image.
However, Phoka teaches the center color distances of the pixels and that the green screen image is obtained by fusing color channels of the foreground image and the background image based on the opacity of the foreground image (Sections 3 & 4). It would have been obvious to one of ordinary skill in the art to utilize the center color distances of pixels in the UV vector calculation and to incorporate the alpha information into the fusing of images.
Regarding claim 7, Li in view of Vlahos and Phoka teach the method of claim 6, and Li further teaches wherein the training an initial parameter prediction model based on sample information to obtain the target parameter prediction model comprises:
executing the following steps at least once to obtain the target parameter prediction model:
acquiring a target sample image from the plurality of sample images, inputting the target sample image into the initial parameter prediction model (section 2), and acquiring an output parameter map of the target sample image output by an initial image processing model (section 4.1 “trimap” is analogous to output parameter map);
determining a target loss function based on the output parameter map (section 4.2 computes loss based on tri-map samples) and the first parameter map corresponding to the target sample image (section 4, T_i is based on first parameters);
modifying the initial parameter prediction model based on the target loss function; wherein, the target loss function comprises at least one of:
a first loss function corresponding to foreground adjustment parameters (sections 4.1 and 4.2 f_i is foreground parameter);
or,
a second loss function corresponding to background adjustment parameters (sections 4.1 and 4.2 use background parameters in calculating T, which is input for the loss function).
Li describes a trimap which is generated based on previous R2CF output and the sample image. The previous output used to create the trimap includes background and foreground information which can be considered analogous to foreground and background adjustment parameters. The trimap is further used in calculating loss.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos and Phoka and in further view of Price (US 2018/0253865 A1).
Regarding claim 8, Li in view of Vlahos and Phoka teaches the method of claim 7, and Li further teaches wherein the modifying the initial parameter prediction model based on the target loss function includes:
determining a first opacity map of the foreground image of the target sample image based on the output parameter map (section 4.1);
determining a third loss function based on the first opacity map and the second opacity map (section 4);
modifying the initial parameter prediction model based on the target loss function and the third loss function (section 4 eq 8).
Li describes a tri-map which is based on the output of the R2CF model. This tri-map is of the foreground and background images of the sample image. This is analogous to the first opacity map of the foreground image based on the output parameter map.
Li fails to teach determining a second opacity map of the foreground image of the target sample image based on the first parameter map corresponding to the target sample image;
However, Price teaches determining a first opacity map of the foreground image of the target sample image based on the output parameter map;
determining a second opacity map of the foreground image of the target sample image based on the first parameter map corresponding to the target sample image;
determining a third loss function based on the first opacity map and the second opacity map;
modifying the initial parameter prediction model based on the target loss function and the third loss function (paragraph [0022]).
Price describes a process of training a model based on an initial tri-map associated with an image and matte which contains alpha values for each pixel (opacity map), calculating error of the matte by comparing it to another matte, and updating the model accordingly. This is analogous to the process of comparing two opacity maps in order to calculate loss and update the prediction model accordingly. Price is considered analogous to the claimed invention as it is in the same field of image manipulation of matting techniques. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the training system of Price in combination with the methods of Li in view of Vlahos and Phoka to implement a training process in the image matting method.
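The matte-comparison step described above can be illustrated as computing a loss between two opacity (alpha) maps; the choice of a mean absolute (L1) difference here is an assumption for illustration, not a formulation taken from Price.

```python
import numpy as np

def opacity_map_loss(pred_alpha, ref_alpha):
    """Illustrative third loss: mean absolute difference between a predicted
    opacity map and a reference opacity map."""
    return float(np.mean(np.abs(pred_alpha - ref_alpha)))

pred = np.array([[0.0, 0.5], [1.0, 1.0]])   # first opacity map (model output)
ref  = np.array([[0.0, 1.0], [1.0, 0.5]])   # second opacity map (reference)
print(opacity_map_loss(pred, ref))          # 0.25
```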
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos and Phoka and Price and in further view of Fang (US 2018/0240257 A1).
Regarding claim 9, Li in view of Vlahos, Phoka, and Price teach the method of claim 8. The combination of references fails to teach wherein the modifying the initial parameter prediction model based on the target loss function and the third loss function comprises:
performing weighted summation of the target loss function and the third loss function based on weights to obtain a total loss function, and
modifying the initial parameter prediction model based on the total loss function.
However, Fang teaches wherein the modifying the initial parameter prediction model based on the target loss function and the third loss function comprises:
performing weighted summation of the target loss function and the third loss function based on weights to obtain a total loss function, and
modifying the initial parameter prediction model based on the total loss function (paragraph [0026]). Fang describes a weighted loss function used for training a neural network and is considered analogous to the claimed invention as it is in the same field of digital image manipulation. It would have been obvious to one of ordinary skill in the art to implement a weighted loss function into the prediction model of Li in view of Vlahos, Phoka, and Price to improve prediction performance.
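The weighted summation recited above can be sketched in one line; the weight values here are hypothetical illustrations, not taken from Fang.

```python
def total_loss(target_loss, third_loss, w1=0.7, w2=0.3):
    """Weighted summation of two loss terms into a single total loss used to
    modify (update) the prediction model."""
    return w1 * target_loss + w2 * third_loss

print(total_loss(0.4, 0.2))  # 0.34
```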
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Vlahos and in further view of Phoka and Xu (WO 2021227838 A1).
Regarding claim 22, Li in view of Vlahos teaches the method of claim 1, wherein the determining a target opacity map for the foreground image in the first image, based on the transparency adjustment parameters for the at least part of pixels and center color distances of the at least part of pixels comprises:
determining an initial opacity map for the foreground image in the first image (section 2.2), based on the transparency adjustment parameters for the at least part of pixels (section 2.2); and
performing guide filtering on the initial opacity map (Section 4.3).
Li fails to teach center color distances of the at least part of pixels.
However, Phoka teaches center color distances of the at least part of pixels (Abstract, Section 3). Phoka describes color difference which is analogous to the color distance. Both Li and Phoka are considered analogous to the claimed invention as they are both in the same field of green screen imaging technology. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Li in view of Vlahos and Phoka to implement an accurate green screen keying method.
Li in view of Vlahos and Phoka fails to teach performing guide filtering on the initial opacity map by taking a grayscale image of the first image as a guide image, to obtain the target opacity map.
However, Xu teaches performing guide filtering on the initial opacity map by taking a grayscale image of the first image as a guide image, to obtain the target opacity map (abstract, paragraph 3 of pg. 10). Xu is considered analogous to the claimed invention as it is in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Xu with Li in view of Vlahos and Phoka and utilize the guide filtering using a grayscale image as the guide image of Xu to improve image quality.
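For illustration, guided filtering of an initial opacity map with a grayscale guide image can be sketched as below, following the standard single-channel guided filter formulation. The window radius and regularizer eps are illustrative choices, not values taken from Xu or Li.

```python
import numpy as np

def box_mean(x, r):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    pad = np.pad(x, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-3):
    """Refine src (initial opacity map) using a local linear model of the
    grayscale guide image: output = a * guide + b per window."""
    mg, ms = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mg * ms     # covariance(guide, src)
    var = box_mean(guide * guide, r) - mg * mg   # variance(guide)
    a = cov / (var + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)

rng = np.random.default_rng(0)
gray = rng.random((8, 8))       # stand-in grayscale image of the first image
alpha0 = rng.random((8, 8))     # stand-in initial opacity map
alpha = guided_filter(gray, alpha0)
print(alpha.shape)              # (8, 8)
```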
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 and 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W McCoy whose telephone number is (571)272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN W MCCOY/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611