Prosecution Insights
Last updated: April 19, 2026
Application No. 18/079,826

FACE REGION BASED AUTOMATIC WHITE BALANCE IN IMAGES

Non-Final OA (§102, §103)
Filed: Dec 12, 2022
Examiner: BONANSINGA, AARON TIMOTHY
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (19 granted / 25 resolved; +14.0% vs TC avg, above average)
Interview Lift: +33.3% among resolved cases with interview
Avg Prosecution: 2y 11m; 29 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 25 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/03/2025 has been entered.

Response to Arguments

Applicant's arguments, filed 11/03/2025, with respect to claims 1, 3, 5-9, 11, 13-16, 18 and 20 have been fully considered but are moot because the arguments do not apply to the current references and/or current combinations of references being used in the current rejection.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 5, 7, 9, 13, 16 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by LEWIS et al. (US 20140341442 A1), hereinafter referenced as LEWIS.

Regarding claim 1, LEWIS explicitly teaches a computer-implemented method to adjust white balance in an image (Fig. 2. Paragraph [0026]-LEWIS discloses FIG. 2 is a flow diagram illustrating one example of a method 200 for providing a face image mask and using a face image mask to make adjustments in images. The filters can include white balance. Further in paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Please also see Fig. 3, 5 and 11), the method comprising: detecting, by a processor (Fig. 11, #1102 called a processor. Paragraph [0080]. Please also read paragraph [0079]), a face in the image, wherein the face corresponds to a plurality of pixels (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Please also see Fig. 3 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining a skin tone classification for the face using a skin tone classifier (Fig. 3. Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 
3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]) by: providing the plurality of pixels of the face as input to the skin tone classifier (Fig. 3. Paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. Please also read paragraph [0049-0053 and 0064-0069]); and outputting, with the skin tone classifier, a skin tone classification for the face (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a set of skin tone region parameters based on the skin tone classification (Fig. 3. 
Paragraph [0053]-LEWIS discloses in block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. Please also see Fig. 2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a region of interest (ROI) for the face by including each pixel with a pixel value in the set of skin tone region parameters and excluding each pixel with a pixel value outside the set of skin tone region parameters (Fig. 3. Paragraph [0054]-LEWIS discloses in block 312, the method can determine a spatial face area based on facial landmarks and/or other facial information received in block 302 (wherein block 312 may be a more accurate representation of the boundaries of the person's face than block 302, which identifies faces and associated information, such as facial landmarks). In paragraph [0057]-LEWIS discloses in block 314, a falloff area can be determined outside of the spatial face area determined in block 316. The falloff area can be a "feathered" area of the mask. The pixels in the falloff area are evaluated for similarity to the characteristic skin color. The falloff area can be a zone extending out from the spatial face area by a predetermined amount. In paragraph [0059]-LEWIS discloses in block 316, the method compares the pixels within the spatial face area and the falloff area to the characteristic skin color determined in block 308. Please also see Fig. 2 and 5 and read paragraph [0034-0035, 0049, 0053, and 0065-0069]); performing a face color calculation for the face based on the ROI for the face (Fig. 3. Paragraph [0059]-LEWIS discloses in block 318 the method designates particular mask pixels of the face mask to indicate facial skin pixels in the image. The designated mask pixels are those pixels corresponding to image pixels having a color within a threshold similarity to the characteristic skin color. 
Each of the three color channels of each pixel is checked to be within the threshold range. The threshold range may be based on the distribution of colors of the pixels selected to determine the characteristic color in block 308. The standard deviation of each color channel as determined in block 310 can be used as an indication of how wide is the color distribution in a channel, and the threshold range can be based on the standard deviation. Further in paragraph [0062]-LEWIS discloses after block 318 is performed, the resulting facial skin mask designates facial skin pixels in the spatial face area and the falloff area which are similar to the characteristic skin color in varying degrees. All other pixels in the facial skin mask are designated as non-facial skin pixels. Please also see Fig. 2 and 5 and read paragraph [0049, 0053, and 0065-0069]); and adjusting the white balance in the image based on the face color calculation to obtain an output image (Fig. 2. Paragraph [0042]-LEWIS discloses in block 218, the method applies one or more adjustments, such as one or more filters and/or other adjustments, to the selected non-face image pixels. In paragraph [0043]-LEWIS discloses the filters can include white balance of the pixels or depicted features not included in the determined face mask. The method can also perform processing on face mask pixels of the image in blocks 208-212 (wherein method 200 and the face mask may be generated in conjunction with methods 300 and 500, and the non-facial pixels designated for white balancing are determined based on the facial color calculation). Moreover, in paragraph [0045]-LEWIS discloses the processing in blocks 212 and/or 218 can be performed in parallel. 
The processing of the face pixels or facial skin pixels can be performed using one or more types of filters, and the processing of the non-face and/or non-facial skin pixels can be performed substantially or partially simultaneously using one or more other filters, allowing efficiency through parallelization).

Regarding claim 5, LEWIS explicitly teaches the computer-implemented method of claim 1, LEWIS further teaches wherein determining the skin tone classification further comprises preprocessing the plurality of pixels of the face, wherein the preprocessing includes adjusting one or more statistics of the image (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region. In paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. In paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0052]-LEWIS discloses the method can also bound the selected face region pixels by performing edge tests to further narrow the pixels in the selected set. The luminance of pixels can be compared to threshold values and pixels having a luminance below a lower threshold or above a higher threshold can be ignored and not selected. Such edge tests can also check if the saturation of the pixels is outside a predetermined threshold, and if so, remove such pixels from consideration) prior to providing the plurality of pixels of the face to the skin tone classifier (Fig. 3. 
Paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. The characteristic skin color is the average color of the selected face region pixels. Please also see Fig. 2 and 5, and read paragraph [0035-0045 and 0054-0060]).

Regarding claim 7, LEWIS explicitly teaches the computer-implemented method of claim 1, LEWIS further teaches wherein the image includes a plurality of faces and wherein determining the region of interest and performing the face color calculation are repeated for each of the plurality of faces (Fig. 5. Paragraph [0016]-LEWIS discloses the system can identify one or more face regions of an image that include pixels that depict faces or portions of faces. In paragraph [0065]-LEWIS discloses In block 502, the method receives facial information identifying a face region. In paragraph [0066]-LEWIS discloses in block 504, the method finds additional pixels outside the face region that have a color similar to the face region pixels, and connects those additional pixels to the face region. In paragraph [0067]-LEWIS discloses a color range in the face region can be determined (such as the average color). In paragraph [0069]-LEWIS discloses the method 500 can be performed separately and independently for each face region identified in the image).

Regarding claim 9, LEWIS explicitly teaches a computing device (Fig. 11, #1100 called a device. Paragraph [0079]) comprising: a processor (Fig. 11, #1102 called a processor. Paragraph [0080]); and a memory (Fig. 11, #1104 called memory. Paragraph [0081]) coupled to the processor, with instructions stored thereon that, when executed by the processor, cause the processor to perform operations (Fig. 11. 
Paragraph [0081]-LEWIS discloses memory 1104 is typically provided in device 1100 for access by the processor 1102, and may be any suitable processor-readable storage medium, such as random access memory (RAM), read-only memory (ROM), Electrical Erasable Read-only Memory (EEPROM), Flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 1102 and/or integrated therewith. Memory 1104 can store software operating on the server device 1100 by the processor 1102, including an operating system 1108 and one or more applications engines 1110 such as a graphics editing engine, web hosting engine, social networking engine, etc. The applications engines 1110 can include instructions that enable processor 1102 to perform the functions described herein, e.g., some or all of the methods of FIGS. 2, 3 and/or 5) comprising: detecting a face in an image, wherein the face corresponds to a plurality of pixels (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Please also see Fig. 3 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining a skin tone classification for the face using a skin tone classifier (Fig. 3. Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). 
Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]) by: providing the plurality of pixels of the face as input to the skin tone classifier (Fig. 3. Paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. Please also read paragraph [0049-0053]); and outputting, with the skin tone classifier, a skin tone classification for the face (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a set of skin tone region parameters based on the skin tone classification (Fig. 3. Paragraph [0053]-LEWIS discloses in block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. Please also see Fig. 
2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a region of interest (ROI) for the face by including each pixel with a pixel value in the set of skin tone region parameters and excluding each pixel with a pixel value outside the set of skin tone region parameters (Fig. 3. Paragraph [0054]-LEWIS discloses in block 312, the method can determine a spatial face area based on facial landmarks and/or other facial information received in block 302 (wherein block 312 may be a more accurate representation of the boundaries of the person's face than block 302, which identifies faces and associated information, such as facial landmarks). In paragraph [0057]-LEWIS discloses in block 314, a falloff area can be determined outside of the spatial face area determined in block 316. The falloff area can be a "feathered" area of the mask. The pixels in the falloff area are evaluated for similarity to the characteristic skin color. The falloff area can be a zone extending out from the spatial face area by a predetermined amount. In paragraph [0059]-LEWIS discloses in block 316, the method compares the pixels within the spatial face area and the falloff area to the characteristic skin color determined in block 308. Please also see Fig. 2 and 5 and read paragraph [0034-0035, 0049, 0053, and 0065-0069]); performing a face color calculation for the face based on the ROI for the face (Fig. 3. Paragraph [0059]-LEWIS discloses in block 318 the method designates particular mask pixels of the face mask to indicate facial skin pixels in the image. The designated mask pixels are those pixels corresponding to image pixels having a color within a threshold similarity to the characteristic skin color. Each of the three color channels of each pixel is checked to be within the threshold range. The threshold range may be based on the distribution of colors of the pixels selected to determine the characteristic color in block 308. 
The standard deviation of each color channel as determined in block 310 can be used as an indication of how wide is the color distribution in a channel, and the threshold range can be based on the standard deviation. Further in paragraph [0062]-LEWIS discloses after block 318 is performed, the resulting facial skin mask designates facial skin pixels in the spatial face area and the falloff area which are similar to the characteristic skin color in varying degrees. All other pixels in the facial skin mask are designated as non-facial skin pixels. Please also see Fig. 2 and 5 and read paragraph [0049, 0053, and 0065-0069])); and adjusting white balance in the image based on the face color calculation to obtain an output image (Fig. 2. Paragraph [0042]-LEWIS discloses in block 218, the method applies one or more adjustments, such as one or more filters and/or other adjustments, to the selected non-face image pixels. In paragraph [0043]-LEWIS discloses the filters can include white balance of the pixels or depicted features not included in the determined face mask. The method can also perform processing on face mask pixels of the image in blocks 208-212 (wherein method 200 and the face mask may be generated in conjunction with methods 300 and 500, and the non-facial pixels designated for white balancing are determined based on the facial color calculation). Moreover, in paragraph [0045]-LEWIS discloses the processing in blocks 212 and/or 218 can be performed in parallel. The processing of the face pixels or facial skin pixels can be performed using one or more types of filters, and the processing of the non-face and/or non-facial skin pixels can be performed substantially or partially simultaneously using one or more other filters, allowing efficiency through parallelization). 
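For orientation only (this is an editor's annotation, not part of the Office Action), the masking steps the examiner cites from LEWIS blocks 308-318, a characteristic skin color taken as the mean of the pre-selected face pixels and a per-channel standard deviation defining the inclusion threshold, can be sketched roughly as follows. The threshold multiplier `k` and the function name are illustrative assumptions, not values or names from the reference:

```python
import numpy as np

def skin_mask(pixels, selected, k=2.0):
    """Sketch of the cited masking approach: compute the characteristic
    skin color as the mean of the pre-selected pixels (block 308), measure
    per-channel spread as a standard deviation (block 310), and keep only
    pixels whose every channel lies within k standard deviations (block 318)."""
    pixels = np.asarray(pixels, dtype=float)      # (N, 3) color values
    mean = pixels[selected].mean(axis=0)          # characteristic skin color
    std = pixels[selected].std(axis=0) + 1e-6     # per-channel distribution width
    within = np.abs(pixels - mean) <= k * std     # per-channel threshold test
    return within.all(axis=1)                     # all three channels must pass
```

A pixel far from the characteristic color in any single channel is excluded, which matches the per-channel check the rejection quotes.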
Regarding claim 13, LEWIS explicitly teaches the computing device of claim 9, LEWIS further teaches wherein determining the skin tone classification further comprises preprocessing the plurality of pixels of the face, wherein the preprocessing includes adjusting one or more statistics of the plurality of pixels of the face (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region. In paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. In paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0052]-LEWIS discloses the method can also bound the selected face region pixels by performing edge tests to further narrow the pixels in the selected set. The luminance of pixels can be compared to threshold values and pixels having a luminance below a lower threshold or above a higher threshold can be ignored and not selected. Such edge tests can also check if the saturation of the pixels is outside a predetermined threshold, and if so, remove such pixels from consideration) prior to providing the image to the skin tone classifier (Fig. 3. Paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. The characteristic skin color is the average color of the selected face region pixels. Please also see Fig. 2 and 5, and read paragraph [0035-0045 and 0054-0060]). 
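As a rough illustration (again an editor's sketch, not part of the record) of the luminance and saturation "edge tests" the examiner cites from LEWIS paragraph [0052] for the preprocessing limitations of claims 5, 13 and 20, candidate pixels outside a luminance band or above a saturation cap are dropped before the characteristic color is computed. The specific thresholds and the simple luminance/saturation proxies below are assumptions for illustration:

```python
import numpy as np

def edge_test_filter(pixels, lum_lo=0.05, lum_hi=0.95, sat_max=0.9):
    """Drop candidate skin pixels whose luminance falls outside
    [lum_lo, lum_hi] or whose saturation exceeds sat_max, in the spirit
    of the edge tests quoted from LEWIS [0052]. Thresholds are placeholders."""
    pixels = np.asarray(pixels, dtype=float)            # (N, 3) RGB in [0, 1]
    mx = pixels.max(axis=1)
    mn = pixels.min(axis=1)
    lum = pixels.mean(axis=1)                           # crude luminance proxy
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return (lum >= lum_lo) & (lum <= lum_hi) & (sat <= sat_max)
```

Near-black, near-white, and heavily saturated pixels are the ones such a filter removes, leaving a cleaner set for the averaging step.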
Regarding claim 16, LEWIS explicitly teaches a non-transitory computer-readable medium (Fig. 11. Paragraph [0079]-LEWIS discloses FIG. 11 is a block diagram of an example device 1100), with instructions stored thereon that, when executed by a processor, cause the processor to perform operations (Fig. 11. Paragraph [0081]-LEWIS discloses memory 1104 is typically provided in device 1100 for access by the processor 1102, and may be any suitable processor-readable storage medium, such as random access memory (RAM), read-only memory (ROM), Electrical Erasable Read-only Memory (EEPROM), Flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 1102 and/or integrated therewith) comprising: detecting a face in an image, wherein the face corresponds to a plurality of pixels (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Please also see Fig. 3 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining a skin tone classification for the face using a skin tone classifier (Fig. 3. Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. 
Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]) by: providing the plurality of pixels of the face as input to the skin tone classifier (Fig. 3. Paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. Please also read paragraph [0049-0053 and 0064-0069]); and outputting, with the skin tone classifier, a skin tone classification for the face (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a set of skin tone region parameters based on the skin tone classification (Fig. 3. Paragraph [0053]-LEWIS discloses in block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. Please also see Fig. 
2 and 5 and read paragraph [0035-0043, 0046-0049, and 0065-0069]); determining a region of interest (ROI) for the face by including each pixel with a pixel value in the set of skin tone region parameters and excluding each pixel with a pixel value outside the set of skin tone region parameters (Fig. 3. Paragraph [0054]-LEWIS discloses in block 312, the method can determine a spatial face area based on facial landmarks and/or other facial information received in block 302 (wherein block 312 may be a more accurate representation of the boundaries of the person's face than block 302, which identifies faces and associated information, such as facial landmarks). In paragraph [0057]-LEWIS discloses in block 314, a falloff area can be determined outside of the spatial face area determined in block 316. The falloff area can be a "feathered" area of the mask. The pixels in the falloff area are evaluated for similarity to the characteristic skin color. The falloff area can be a zone extending out from the spatial face area by a predetermined amount. In paragraph [0059]-LEWIS discloses in block 316, the method compares the pixels within the spatial face area and the falloff area to the characteristic skin color determined in block 308. Please also see Fig. 2 and 5 and read paragraph [0034-0035, 0049, 0053, and 0065-0069]); performing a face color calculation for the face based on the ROI for the face (Fig. 3. Paragraph [0059]-LEWIS discloses in block 318 the method designates particular mask pixels of the face mask to indicate facial skin pixels in the image. The designated mask pixels are those pixels corresponding to image pixels having a color within a threshold similarity to the characteristic skin color. Each of the three color channels of each pixel is checked to be within the threshold range. The threshold range may be based on the distribution of colors of the pixels selected to determine the characteristic color in block 308. 
The standard deviation of each color channel as determined in block 310 can be used as an indication of how wide is the color distribution in a channel, and the threshold range can be based on the standard deviation. Further in paragraph [0062]-LEWIS discloses after block 318 is performed, the resulting facial skin mask designates facial skin pixels in the spatial face area and the falloff area which are similar to the characteristic skin color in varying degrees. All other pixels in the facial skin mask are designated as non-facial skin pixels. Please also see Fig. 2 and 5 and read paragraph [0049, 0053, and 0065-0069]); and adjusting white balance in the image based on the face color calculation to obtain an output image (Fig. 2. Paragraph [0042]-LEWIS discloses in block 218, the method applies one or more adjustments, such as one or more filters and/or other adjustments, to the selected non-face image pixels. In paragraph [0043]-LEWIS discloses the filters can include white balance of the pixels or depicted features not included in the determined face mask. The method can also perform processing on face mask pixels of the image in blocks 208-212 (wherein method 200 and the face mask may be generated in conjunction with methods 300 and 500, and the non-facial pixels designated for white balancing are determined based on the facial color calculation). Moreover, in paragraph [0045]-LEWIS discloses the processing in blocks 212 and/or 218 can be performed in parallel. The processing of the face pixels or facial skin pixels can be performed using one or more types of filters, and the processing of the non-face and/or non-facial skin pixels can be performed substantially or partially simultaneously using one or more other filters, allowing efficiency through parallelization). 
Regarding claim 20, LEWIS explicitly teaches the non-transitory computer-readable medium of claim 16, LEWIS further teaches wherein determining the skin tone classification further comprises preprocessing the plurality of pixels of the face, wherein the preprocessing includes adjusting one or more statistics of the image (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region. In paragraph [0048]-LEWIS discloses in block 304, the method converts the colors of each of the pixels within the face region obtained in block 302 to a particular color space. In paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. The predetermined range of known skin colors can be defined as a predefined zone or area on a color space graph that indicates a desired range of known skin colors. In paragraph [0052]-LEWIS discloses the method can also bound the selected face region pixels by performing edge tests to further narrow the pixels in the selected set. The luminance of pixels can be compared to threshold values and pixels having a luminance below a lower threshold or above a higher threshold can be ignored and not selected. Such edge tests can also check if the saturation of the pixels is outside a predetermined threshold, and if so, remove such pixels from consideration) prior to providing the plurality of pixels of the face to the skin tone classifier (Fig. 3. Paragraph [0053]-LEWIS discloses in block 308, the method determines a characteristic skin color for the selected face region pixels. The characteristic skin color is the average color of the selected face region pixels. Please also see Fig. 2 and 5, and read paragraph [0035-0045 and 0054-0060]). 
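Before turning to the §103 rejections, the overall flow recited in independent claims 1, 9 and 16, averaging the ROI pixels (the "face color calculation") and adjusting white balance based on that result, might be sketched as below. This is an editor's illustration only: the reference skin color `target`, the per-channel gain formula, and the function name are assumptions, since neither the application's actual calculation nor LEWIS's filter implementation is reproduced in the excerpt above:

```python
import numpy as np

def face_white_balance(image, roi_mask, target=(0.78, 0.60, 0.50)):
    """Illustrative sketch of the claimed flow: average the ROI pixels
    (face color calculation), derive per-channel gains that move that
    average toward a placeholder reference skin color, and apply the
    gains to the whole image to obtain the output image."""
    image = np.asarray(image, dtype=float)           # (H, W, 3), values in [0, 1]
    face_avg = image[roi_mask].mean(axis=0)          # face color calculation
    gains = np.asarray(target) / np.maximum(face_avg, 1e-6)
    return np.clip(image * gains, 0.0, 1.0)          # white-balanced output
```

With multiple faces, as in claims 3 and 7, the ROI determination and averaging would repeat per face before a combined correction is chosen; that aggregation step is not shown here.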
Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 3, 11, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over LEWIS et al. (US 20140341442 A1), hereinafter referenced as LEWIS in view of YUAN et al. (US 20200137369 A1), hereinafter referenced as YUAN and in further view of TADA et al. (US 20180350048 A1), hereinafter referenced as TADA. Regarding claim 3, LEWIS explicitly teaches the computer-implemented method of claim 1, LEWIS further teaches wherein: the image includes a plurality of faces (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Further in paragraph [0016]-LEWIS discloses the system can identify one or more face regions of an image that include pixels that depict faces or portions of faces. Please also see Fig. 
2 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining the skin tone classification is performed for each of the plurality of faces (Fig. 3. Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]); determining the region of interest is performed for each of the plurality of faces (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]); and performing the face color calculation includes (Fig. 3. Paragraph [0053]-LEWIS discloses referring back to FIG. 3, in block 308, the method determines a characteristic skin color for the selected face region pixels), for each face: calculating, for pixels in the region of interest, an average color value (Fig. 3. 
Paragraph [0053]-LEWIS discloses the characteristic skin color is the average color of the selected face region pixels, e.g., the average color component in each of the three R, G, and B color channels for the selected pixels. In block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. This information estimates how widely varying is the distribution of colors in the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]); LEWIS fails to explicitly teach obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values. However, YUAN explicitly teaches obtaining a ratio of pixel values for the region of interest to pixel values for the image (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image); calculating a face weight (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image. 
In paragraph [0061]-YUAN discloses the image is divided into multiple sub-blocks, and each sub-block has the same area (wherein an area of each sub-block is known, so that the area of the face region may be calculated). In paragraph [0064]-YUAN discloses a quotient obtained by dividing the area occupied by the target region by a total area of the image is the area proportion of the target region. In paragraph [0073]-YUAN discloses in 1023, a weight of the first gain value and a weight of the second gain value are determined according to the area proportion of the target region) based on the skin tone classification for the face and the ratio of the pixel values (Fig. 3. Paragraph [0065]-YUAN discloses in 1022, a first gain value and a second gain value of each color component are calculated according to the area proportion. In paragraph [0066]-YUAN discloses the first gain value is used to regulate the face in the image to the skin color. In paragraph [0068]-YUAN discloses color components of all the pixels of the face region are acquired, a color of each pixel is represented by a color component (R, G, B), and the color vectors of each pixel may be averaged to calculate a color vector corresponding to the skin color of the face. It is determined whether R, G and B values corresponding to the skin color of the face are within the range of R, G and B values corresponding to the normal face skin color. When R, G and B values corresponding to the skin color of the face are not within the range of R, G and B values corresponding to the normal face skin color, the R, G and B values corresponding to the skin color of the face are adjusted through a gain value to be within the range of R, 
G and B values corresponding to the normal face skin color, and the gain value is the first gain value); and Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS of having the computer-implemented method, with the teachings of YUAN of having obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values. As modified, LEWIS’s computer-implemented method would obtain a ratio of pixel values for the region of interest to pixel values for the image and calculate a face weight based on the skin tone classification for the face and the ratio of the pixel values. The motivation behind the modification would have been to obtain a computer-implemented method that improves automatic white balancing and color correction, since both LEWIS and YUAN concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while YUAN’s systems and methods provide improved white balancing regulation, user experience and reduced color cast for faces in images. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraph [0015-0020] and YUAN et al. (US 20200137369 A1), Abstract and Paragraph [0014]. LEWIS in view of YUAN fail to explicitly teach and dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. However, TADA explicitly teaches and dividing a product of the average color value (Fig. 3. Paragraph [0077]-TADA discloses FIG. 10A is a view illustrating the face regions 600a and 600b of FIG. 6, which are subjected to block division by the block integrating circuit 132, and the shine region 601b is included in the overlapping region of the face regions 600a and 600b. 
The system controller 107 calculates an average value of color information of a region 1001b of blocks (wherein 600a and 600b are different faces in an image that may overlap)) and the face weight by a sum of the face weights of the plurality of faces (Fig. 3. Paragraph [0081]-TADA discloses the system controller 107 calculates T value=(degree of reliability A)×(degree of reliability B)×(degree of reliability C) (wherein the degrees of reliability are set based on whether the overlapping regions belong to the face region). In paragraph [0083]-TADA discloses in a case where the skin color of the face region 600a is (R1, G1, B1) and the skin color of the face region 600b is (R2, G2, B2), the system controller 107 may obtain a weight average (Ra, Ga, Ba) of the skin colors of the face regions 600a and 600b by formulas (2) to (4), and may set the weight average as the target value of the skin color (wherein formulas (2)-(4) include Ra=(R1×T1+R2×T2)/(T1+T2), Ga=(G1×T1+G2×T2)/(T1+T2), and Ba=(B1×T1+B2×T2)/(T1+T2)). Please also read paragraph [0069-0070, and 0096-0102]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS in view of YUAN of having the computer-implemented method, with the teachings of TADA of having dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. As modified, LEWIS’s computer-implemented method would divide a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. The motivation behind the modification would have been to obtain a computer-implemented method that improves automatic white balancing, color correction and shine correction, since both LEWIS and TADA concern white balancing in images. 
LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while TADA’s systems and methods provide improved shine correction processing of a skin color region of a face image when there are overlapping faces. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraph [0015-0020] and TADA et al. (US 20180350048 A1), Abstract and Paragraph [0100-0103]. Regarding claim 11, LEWIS explicitly teaches the computing device of claim 9, LEWIS further teaches wherein: the image includes a plurality of faces (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Further in paragraph [0016]-LEWIS discloses the system can identify one or more face regions of an image that include pixels that depict faces or portions of faces. Please also see Fig. 2 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining the skin tone classification is performed for each of the plurality of faces (Fig. 3. Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. 
Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]); determining the region of interest is performed for each of the plurality of faces (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]); and performing the face color calculation includes (Fig. 3. Paragraph [0053]-LEWIS discloses referring back to FIG. 3, in block 308, the method determines a characteristic skin color for the selected face region pixels), for each face: calculating, for pixels in the region of interest, an average color value (Fig. 3. Paragraph [0053]-LEWIS discloses the characteristic skin color is the average color of the selected face region pixels, e.g., the average color component in each of the three R, G, and B color channels for the selected pixels. In block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. This information estimates how widely varying is the distribution of colors in the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]). 
LEWIS fails to explicitly teach obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values; and However, YUAN explicitly teaches obtaining a ratio of pixel values for the region of interest to pixel values for the image (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image); calculating a face weight (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image. In paragraph [0061]-YUAN discloses the image is divided into multiple sub-blocks, and each sub-block has the same area (wherein an area of each sub-block is known, so that the area of the face region may be calculated). In paragraph [0064]-YUAN discloses a quotient obtained by dividing the area occupied by the target region by a total area of the image is the area proportion of the target region. 
In paragraph [0073]-YUAN discloses in 1023, a weight of the first gain value and a weight of the second gain value are determined according to the area proportion of the target region) based on the skin tone classification for the face and the ratio of the pixel values (Fig. 3. Paragraph [0065]-YUAN discloses in 1022, a first gain value and a second gain value of each color component are calculated according to the area proportion. In paragraph [0066]-YUAN discloses the first gain value is used to regulate the face in the image to the skin color. In paragraph [0068]-YUAN discloses color components of all the pixels of the face region are acquired, a color of each pixel is represented by a color component (R, G, B), and the color vectors of each pixel may be averaged to calculate a color vector corresponding to the skin color of the face. It is determined whether R, G and B values corresponding to the skin color of the face are within the range of R, G and B values corresponding to the normal face skin color. When R, G and B values corresponding to the skin color of the face are not within the range of R, G and B values corresponding to the normal face skin color, the R, G and B values corresponding to the skin color of the face are adjusted through a gain value to be within the range of R, G and B values corresponding to the normal face skin color, and the gain value is the first gain value); and Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS of having the computing device, with the teachings of YUAN of having obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values. 
As modified, LEWIS’s computing device would obtain a ratio of pixel values for the region of interest to pixel values for the image and calculate a face weight based on the skin tone classification for the face and the ratio of the pixel values. The motivation behind the modification would have been to obtain a computing device that improves automatic white balancing and color correction, since both LEWIS and YUAN concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while YUAN’s systems and methods provide improved white balancing regulation, user experience and reduced color cast for faces in images. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraph [0015-0020] and YUAN et al. (US 20200137369 A1), Abstract and Paragraph [0014]. LEWIS in view of YUAN fail to explicitly teach dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. However, TADA explicitly teaches dividing a product of the average color value (Fig. 3. Paragraph [0077]-TADA discloses FIG. 10A is a view illustrating the face regions 600a and 600b of FIG. 6, which are subjected to block division by the block integrating circuit 132, and the shine region 601b is included in the overlapping region of the face regions 600a and 600b. The system controller 107 calculates an average value of color information of a region 1001b of blocks (wherein 600a and 600b are different faces in an image that may overlap)) and the face weight by a sum of the face weights of the plurality of faces (Fig. 3. Paragraph [0081]-TADA discloses the system controller 107 calculates T value=(degree of reliability A)×(degree of reliability B)×(degree of reliability C) (wherein the degrees of reliability are set based on whether the overlapping regions belong to the face region). 
In paragraph [0083]-TADA discloses in a case where the skin color of the face region 600a is (R1, G1, B1) and the skin color of the face region 600b is (R2, G2, B2), the system controller 107 may obtain a weight average (Ra, Ga, Ba) of the skin colors of the face regions 600a and 600b by formulas (2) to (4), and may set the weight average as the target value of the skin color (wherein formulas (2)-(4) include Ra=(R1×T1+R2×T2)/(T1+T2), Ga=(G1×T1+G2×T2)/(T1+T2), and Ba=(B1×T1+B2×T2)/(T1+T2)). Please also read paragraph [0069-0070, and 0096-0102]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS in view of YUAN of having the computing device, with the teachings of TADA of having dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. As modified, LEWIS’s computing device would divide a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. The motivation behind the modification would have been to obtain a computing device that improves automatic white balancing, color correction and shine correction, since both LEWIS and TADA concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while TADA’s systems and methods provide improved shine correction processing of a skin color region of a face image when there are overlapping faces. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraph [0015-0020] and TADA et al. (US 20180350048 A1), Abstract and Paragraph [0100-0103]. 
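The cited computation can be illustrated concretely: YUAN's area proportion is the face-region area divided by the total image area, and TADA's formulas (2)-(4) compute a weighted average skin color, e.g., Ra=(R1×T1+R2×T2)/(T1+T2). The sketch below is illustrative only; the example colors and weight values are assumptions, and the generalization to more than two faces follows the same form as formulas (2)-(4).

```python
def area_proportion(face_area, image_width, image_height):
    """YUAN's area proportion: area occupied by the target region
    divided by the total area of the image (paragraph [0064])."""
    return face_area / (image_width * image_height)

def weighted_skin_color(avg_colors, weights):
    """TADA's formulas (2)-(4), e.g. Ra = (R1*T1 + R2*T2) / (T1 + T2),
    applied per channel across any number of faces.
    avg_colors: per-face (R, G, B) average colors; weights: per-face T values."""
    total = sum(weights)
    return tuple(
        sum(color[ch] * w for color, w in zip(avg_colors, weights)) / total
        for ch in range(3)
    )

# Two faces with assumed average colors and assumed weights T1=3.0, T2=1.0
target = weighted_skin_color([(200, 150, 120), (180, 130, 100)], [3.0, 1.0])
```

Dividing each product of average color and face weight by the sum of the face weights is exactly the normalization step TADA's denominator (T1+T2) performs.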
Regarding claim 14, LEWIS explicitly teaches the computing device of claim 11, LEWIS further teaches wherein the image includes a plurality of faces and wherein determining the region of interest and performing the face color calculation are repeated for each of the plurality of faces (Fig. 5. Paragraph [0016]-LEWIS discloses the system can identify one or more face regions of an image that include pixels that depict faces or portions of faces. In paragraph [0065]-LEWIS discloses in block 502, the method receives facial information identifying a face region. In paragraph [0066]-LEWIS discloses in block 504, the method finds additional pixels outside the face region that have a color similar to the face region pixels, and connects those additional pixels to the face region. In paragraph [0067]-LEWIS discloses a color range in the face region can be determined (such as the average color). In paragraph [0069]-LEWIS discloses the method 500 can be performed separately and independently for each face region identified in the image. Please also read paragraph [0063]). Regarding claim 18, LEWIS explicitly teaches the non-transitory computer-readable medium of claim 16, LEWIS further explicitly teaches wherein: the image includes a plurality of faces (Fig. 3. Paragraph [0047]-LEWIS discloses in block 302, the method receives facial information identifying a face region (wherein the image may contain a plurality of faces, and the facial regions may be determined separately and independently). Further in paragraph [0016]-LEWIS discloses the system can identify one or more face regions of an image that include pixels that depict faces or portions of faces. Please also see Fig. 2 and 5, and read paragraph [0028-0029, 0063 and 0069]); determining the skin tone classification is performed for each of the plurality of faces (Fig. 3. 
Paragraph [0033]-LEWIS discloses if the face portions in the image are to be processed, then the method continues to block 208, where a face image mask for the image is determined. In paragraph [0046]-LEWIS discloses FIG. 3 is a flow diagram illustrating a method 300 for determining a face image mask, such as a facial skin mask that indicates which pixels of an image depict a person's facial skin. Method 300 can be used in the method 200 of FIG. 2 to determine a face mask (wherein method 300 includes determining a skin tone classification). Further in paragraph [0063]-LEWIS discloses the method 300 can be performed separately and independently for each face region identified in the image. Each face region can have its own characteristic skin color determined and its own pixels selected for inclusion in the skin pixels for that face, and each face can have a different amount of processing performed. Please also see Fig. 3 and 5, and read paragraph [0032-0042, 0048-0053 and 0066-0070]); determining the region of interest is performed for each of the plurality of faces (Fig. 3. Paragraph [0049]-LEWIS discloses in block 306, the method examines the converted colors of the pixels within the face region obtained in block 302, compares these colors to a predetermined range of known skin colors, and selects face region pixels that are found within the predetermined color range. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]); and performing the face color calculation includes (Fig. 3. Paragraph [0053]-LEWIS discloses referring back to FIG. 3, in block 308, the method determines a characteristic skin color for the selected face region pixels), for each face: calculating, for pixels in the region of interest, an average color value (Fig. 3. 
Paragraph [0053]-LEWIS discloses the characteristic skin color is the average color of the selected face region pixels, e.g., the average color component in each of the three R, G, and B color channels for the selected pixels. In block 310, the method determines information about the distribution of colors within the set of selected face region pixels. A standard deviation can be determined for each color channel. This information estimates how widely varying is the distribution of colors in the selected face region pixels. Please also see Fig. 2 and 5 and read paragraph [0035-0043 and 0065-0069]); LEWIS fails to explicitly teach obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values; and However, YUAN explicitly teaches obtaining a ratio of pixel values for the region of interest to pixel values for the image (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image); calculating a face weight (Fig. 3. Paragraph [0059]-YUAN discloses in 1021, an area proportion of the target region in the image is calculated according to the area occupied by the target region in the image. In paragraph [0060]-YUAN discloses the target region is the face region or the portrait region. The areas occupied by the face region and the portrait region in the image may be calculated to further calculate area proportions of the face region and the portrait region in the image. 
In paragraph [0061]-YUAN discloses the image is divided into multiple sub-blocks, and each sub-block has the same area (wherein an area of each sub-block is known, so that the area of the face region may be calculated). In paragraph [0064]-YUAN discloses a quotient obtained by dividing the area occupied by the target region by a total area of the image is the area proportion of the target region. In paragraph [0073]-YUAN discloses in 1023, a weight of the first gain value and a weight of the second gain value are determined according to the area proportion of the target region) based on the skin tone classification for the face and the ratio of the pixel values (Fig. 3. Paragraph [0065]-YUAN discloses in 1022, a first gain value and a second gain value of each color component are calculated according to the area proportion. In paragraph [0066]-YUAN discloses the first gain value is used to regulate the face in the image to the skin color. In paragraph [0068]-YUAN discloses color components of all the pixels of the face region are acquired, a color of each pixel is represented by a color component (R, G, B), and the color vectors of each pixel may be averaged to calculate a color vector corresponding to the skin color of the face. It is determined whether R, G and B values corresponding to the skin color of the face are within the range of R, G and B values corresponding to the normal face skin color. When R, G and B values corresponding to the skin color of the face are not within the range of R, G and B values corresponding to the normal face skin color, the R, G and B values corresponding to the skin color of the face are adjusted through a gain value to be within the range of R, 
G and B values corresponding to the normal face skin color, and the gain value is the first gain value); and Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS of having the non-transitory computer-readable medium, with the teachings of YUAN of having obtaining a ratio of pixel values for the region of interest to pixel values for the image; calculating a face weight based on the skin tone classification for the face and the ratio of the pixel values. As modified, LEWIS’s non-transitory computer-readable medium would obtain a ratio of pixel values for the region of interest to pixel values for the image and calculate a face weight based on the skin tone classification for the face and the ratio of the pixel values. The motivation behind the modification would have been to obtain a non-transitory computer-readable medium that improves automatic white balancing and color correction, since both LEWIS and YUAN concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while YUAN’s systems and methods provide improved white balancing regulation, user experience and reduced color cast for faces in images. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraph [0015-0020] and YUAN et al. (US 20200137369 A1), Abstract and Paragraph [0014]. LEWIS in view of YUAN fail to explicitly teach dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. However, TADA explicitly teaches dividing a product of the average color value (Fig. 3. Paragraph [0077]-TADA discloses FIG. 10A is a view illustrating the face regions 600a and 600b of FIG. 
6, which are subjected to block division by the block integrating circuit 132, and the shine region 601b is included in the overlapping region of the face regions 600a and 600b. The system controller 107 calculates an average value of color information of a region 1001b of blocks (wherein 600a and 600b are different faces in an image that may overlap)) and the face weight by a sum of the face weights of the plurality of faces (Fig. 3. Paragraph [0081]-TADA discloses the system controller 107 calculates T value=(degree of reliability A)×(degree of reliability B)×(degree of reliability C) (wherein the degrees of reliability are set based on whether the overlapping regions belong to the face region). In paragraph [0083]-TADA discloses in a case where the skin color of the face region 600a is (R1, G1, B1) and the skin color of the face region 600b is (R2, G2, B2), the system controller 107 may obtain a weight average (Ra, Ga, Ba) of the skin colors of the face regions 600a and 600b by formulas (2) to (4), and may set the weight average as the target value of the skin color (wherein formulas (2)-(4) include Ra=(R1×T1+R2×T2)/(T1+T2), Ga=(G1×T1+G2×T2)/(T1+T2), and Ba=(B1×T1+B2×T2)/(T1+T2)). Please also read paragraph [0069-0070, and 0096-0102]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS in view of YUAN of having the non-transitory computer-readable medium, with the teachings of TADA of having dividing a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. As modified, LEWIS’s non-transitory computer-readable medium would divide a product of the average color value and the face weight by a sum of the face weights of the plurality of faces. 
The motivation behind the modification would have been to obtain a non-transitory computer-readable medium that improves automatic white balancing, color correction, and face obstruction detection, since both LEWIS and TADA concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while TADA’s systems and methods provide improved shine correction processing of a skin color region of a face image when there are overlapping faces. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraphs [0015]-[0020], and TADA et al. (US 20180350048 A1), Abstract and Paragraphs [0100]-[0103]. Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over LEWIS et al. (US 20140341442 A1), hereinafter referenced as LEWIS, in view of Cao et al. (US 11935322 B1), hereinafter referenced as Cao. Regarding claim 6, LEWIS explicitly teaches the computer-implemented method of claim 1. LEWIS fails to explicitly teach wherein the face is at least partially occluded by a face covering and wherein the region of interest excludes at least a portion of the face covering. However, Cao explicitly teaches wherein the face is at least partially occluded by a face covering (Fig. 4A. Column [07], Line [05-12]-Cao discloses at Step 408, the method 400 may identify obstructions in one or more of the plurality of regions. As mentioned above, obstructions may comprise facial hair, head hair, large or oversized glasses or sunglasses, facial and/or head coverings, face masks, or clothing, such as scarves or hoods, etc., or any other pixels having a color and/or texture that is determined not to be indicative of human skin tones, e.g., according to a trained face obstruction detection model), and wherein the region of interest excludes at least a portion of the face covering (Fig. 4A.
Column [07], Line [13-19]-Cao discloses at Step 410, the method 400 may select a subset of regions, based on the identified obstructions. The method 400 may only select regions that have no obstruction pixels detected within them. The method 400 may select regions that have fewer than a threshold number of obstruction pixels (e.g., 5% obstructed pixels) detected within them. Please also read Column [05], Line [16-28]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS of having a computer-implemented method to adjust white balance in an image with the teachings of Cao of having wherein the face is at least partially occluded by a face covering, and wherein the region of interest excludes at least a portion of the face covering, such that LEWIS’s computer-implemented method provides wherein the face is at least partially occluded by a face covering, and wherein the region of interest excludes at least a portion of the face covering. The motivation behind the modification would have been to obtain a computer-implemented method that improves automatic white balancing, color correction, and face obstruction detection, since both LEWIS and Cao concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while Cao’s systems and methods provide improved face obstruction detection models that may be leveraged to provide improved image processing, e.g., auto white balancing (AWB) or other image color correction-related processing tasks. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraphs [0015]-[0020], and Cao et al. (US 11935322 B1), Abstract and Column [01], Line [41-60].
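The region-selection step Cao describes at Step 410 can be sketched as a simple threshold filter. A minimal illustration, assuming each region carries a total pixel count and an obstruction-pixel count (the data layout and names are assumptions, not from the reference):

```python
# Illustrative sketch of Cao's Step 410: keep only face regions whose fraction
# of obstruction pixels falls below a threshold (e.g., 5%, as quoted above).

def select_unobstructed_regions(regions, threshold=0.05):
    """regions: list of dicts with 'pixels' (total) and 'obstructed' counts."""
    return [
        r for r in regions
        if r["pixels"] > 0 and r["obstructed"] / r["pixels"] < threshold
    ]

regions = [
    {"pixels": 400, "obstructed": 0},   # clear skin region: kept
    {"pixels": 400, "obstructed": 10},  # 2.5% obstructed: kept
    {"pixels": 400, "obstructed": 80},  # 20% obstructed (e.g., face mask): dropped
]
subset = select_unobstructed_regions(regions)
```

Setting `threshold` very small approximates Cao's stricter alternative of selecting only regions with no detected obstruction pixels.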
Regarding claim 8, LEWIS explicitly teaches the computer-implemented method of claim 7. LEWIS is silent on further comprising determining that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed prior to determining the region of interest and performing the face color calculation. However, Cao explicitly teaches further comprising determining that a particular face of the plurality of faces (Fig. 1. Column [03], Line [17-21]-Cao discloses the plurality of regions comprising the first face (and any other detected faces in the input image) may be sent to a face obstruction detection model (block 120) to determine whether or not there are likely any obstructions covering portions of the first face or other detected faces) does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels (Fig. 4A. Column [05], Line [16-28]-Cao discloses first, at Step 402, the method 400 may begin by obtaining an input image. Next, at Step 404, the method 400 may identify a first face in the input image. A face may have to pass one or more quality thresholds before qualifying to be included in the operation of process 400 (e.g., a minimum size requirement). Please also see Column [07], Line [13-19]), wherein the particular face is removed prior to determining the region of interest and performing the face color calculation (Fig. 4A. Column [06], Line [64-66]-Cao discloses at Step 406, the method 400 may divide the first face into a plurality of regions. At Column [07], Line [13-14]-Cao discloses at Step 410, the method 400 may select a subset of regions, based on the identified obstructions.
At Column [07], Line [20-25]-Cao discloses at Step 412, the method 400 may determine a first white point (or determine any other desired image color correction-related property, e.g., skin color distribution) for the first face based, at least in part, on the selected subset of regions (e.g., based only on the pixels within the selected subset of regions that are determined to be non-obstructed pixels, such as skin pixels). At Column [07], Line [28-31]-Cao discloses at Step 414, the method 400 may optionally perform a white balancing operation (or perform any other desired image color correction-related processing task, e.g., skin tone color correction) on the input image based, at least in part, on the determined first white point (or other desired image color correction-related property, e.g., skin color distribution)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS of having a computer-implemented method to adjust white balance in an image with the teachings of Cao of having further comprising determining that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed prior to determining the region of interest and performing the face color calculation, such that LEWIS’s computer-implemented method determines that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed prior to determining the region of interest and performing the face color calculation.
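The size-threshold pre-filter at issue in claim 8 amounts to dropping any detected face whose total pixel count is below a minimum before the region of interest is determined. A hedged sketch, where the function name, the data layout, and the particular threshold value are all illustrative assumptions:

```python
# Hypothetical sketch of the claimed size-threshold pre-filter: remove faces
# whose total pixel count is below a minimum before any face color calculation.

MIN_FACE_PIXELS = 1024  # assumed minimum size requirement; not from the claim

def filter_faces_by_size(faces, min_pixels=MIN_FACE_PIXELS):
    """faces: list of dicts with a 'pixel_count' entry per detected face."""
    return [f for f in faces if f["pixel_count"] >= min_pixels]

faces = [{"pixel_count": 5000}, {"pixel_count": 300}]  # second face is too small
kept = filter_faces_by_size(faces)  # only the 5000-pixel face remains
```

Filtering before the region-of-interest step keeps tiny faces, whose sampled skin pixels are few and noisy, from skewing the downstream white-point estimate.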
The motivation behind the modification would have been to obtain a computer-implemented method that improves automatic white balancing, color correction, and face obstruction detection, since both LEWIS and Cao concern white balancing in images. LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while Cao’s systems and methods provide improved face obstruction detection models that may be leveraged to provide improved image processing, e.g., auto white balancing (AWB) or other image color correction-related processing tasks. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraphs [0015]-[0020], and Cao et al. (US 11935322 B1), Abstract and Column [01], Line [41-60]. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over LEWIS et al. (US 20140341442 A1), hereinafter referenced as LEWIS, in view of YUAN et al. (US 20200137369 A1), hereinafter referenced as YUAN, in further view of TADA et al. (US 20180350048 A1), hereinafter referenced as TADA, and in further view of Cao et al. (US 11935322 B1), hereinafter referenced as Cao. Regarding claim 15, LEWIS in view of YUAN and in further view of TADA explicitly teaches the computing device of claim 14. LEWIS in view of YUAN fails to explicitly teach further comprising determining that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed from the plurality of faces prior to determining the region of interest and performing the face color calculation. However, Cao explicitly teaches further comprising determining that a particular face of the plurality of faces (Fig. 1.
Column [03], Line [17-21]-Cao discloses the plurality of regions comprising the first face (and any other detected faces in the input image) may be sent to a face obstruction detection model (block 120) to determine whether or not there are likely any obstructions covering portions of the first face or other detected faces) does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels (Fig. 4A. Column [05], Line [16-28]-Cao discloses first, at Step 402, the method 400 may begin by obtaining an input image. Next, at Step 404, the method 400 may identify a first face in the input image. A face may have to pass one or more quality thresholds before qualifying to be included in the operation of process 400 (e.g., a minimum size requirement). Please also see Column [07], Line [13-19]), wherein the particular face is removed from the plurality of faces prior to determining the region of interest and performing the face color calculation (Fig. 4A. Column [06], Line [64-66]-Cao discloses at Step 406, the method 400 may divide the first face into a plurality of regions. At Column [07], Line [13-14]-Cao discloses at Step 410, the method 400 may select a subset of regions, based on the identified obstructions. At Column [07], Line [20-25]-Cao discloses at Step 412, the method 400 may determine a first white point (or determine any other desired image color correction-related property, e.g., skin color distribution) for the first face based, at least in part, on the selected subset of regions (e.g., based only on the pixels within the selected subset of regions that are determined to be non-obstructed pixels, such as skin pixels, within the selected subset of regions). 
At Column [07], Line [28-31]-Cao discloses at Step 414, the method 400 may optionally perform a white balancing operation (or perform any other desired image color correction-related processing task, e.g., skin tone color correction) on the input image based, at least in part, on the determined first white point (or other desired image color correction-related property, e.g., skin color distribution)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LEWIS in view of YUAN and in further view of TADA of having a computing device with the teachings of Cao of having further comprising determining that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed from the plurality of faces prior to determining the region of interest and performing the face color calculation, such that LEWIS’s computing device to adjust white balance in an image determines that a particular face of the plurality of faces does not meet a size threshold based on a total number of pixels corresponding to the particular face being less than a threshold number of pixels, wherein the particular face is removed from the plurality of faces prior to determining the region of interest and performing the face color calculation. The motivation behind the modification would have been to obtain a computing device that improves automatic white balancing, color correction, and face obstruction detection, since both LEWIS and Cao concern white balancing in images.
LEWIS’s systems and methods provide more accurate generation of a face image mask and robust selection of faces and skin, while Cao’s systems and methods provide improved face obstruction detection models that may be leveraged to provide improved image processing, e.g., auto white balancing (AWB) or other image color correction-related processing tasks. Please see LEWIS et al. (US 20140341442 A1), Abstract and Paragraphs [0015]-[0020], and Cao et al. (US 11935322 B1), Abstract and Column [01], Line [41-60]. Conclusion Listed below is the prior art made of record and not relied upon but considered pertinent to applicant's disclosure. LU et al. (US 20110234845 A1)- An image processing system auto white balances an image using an object in the image and a reference color distribution. Given an input image, a target object in the input image is identified. A reference color distribution for the object type of the target object from the input image is accessed. One or more image processing settings are determined that, when applied to the input image, minimize a difference in values between pixels of the target object and the reference color distribution. A white balanced image is generated by applying the one or more image processing settings to the input image, and the white balanced image is provided for presentation. Please see Fig. 1 and 5. Abstract. ZHOU et al. (US 20230169749 A1)- This application provides a skin color detection method and apparatus.
The skin color detection method includes: obtaining a face image (101); determining a face key point (102) in the face image; determining a skin color estimation region of interest (Region Of Interest, ROI) and an illumination estimation region of interest ROI (103) in the face image based on the face key point; obtaining a detected skin color value (104) corresponding to the skin color estimation region of interest; obtaining a detected illumination color value (105) corresponding to the illumination estimation region of interest; and using the detected skin color value and the detected illumination color value as feature input of a skin color estimation model, and obtaining a corrected skin color value (106) output by the skin color estimation model. Please see Fig. 1-5. Corcoran et al. (US 20130236052 A1)- A technique for processing a digital image uses face detection to achieve one or more desired image processing parameters. A group of pixels is identified that corresponds to a face image within the digital image. A skin tone is detected for the face image by determining one or more default color or tonal values, or combinations thereof, for the group of pixels. Values of one or more parameters are adjusted for the group of pixels that correspond to the face image based on the detected skin tone. Please see Fig. 1-3. Abstract. Freeman et al. (US 20190122404 A1)- A system and method of augmenting image data are described.
In one embodiment, the method comprises receiving data of an image captured by a camera, the captured image including a region having a visible feature of an object, storing masking data defining a plurality of masks, each mask defining a respective masked portion of the region of the captured image, sampling pixel values at predefined locations of the captured image data, selecting at least one stored mask based on the sampled pixel values, modifying pixel values in the or each selected masked portion of the region of the captured image based on colourisation parameters, and outputting the captured image with the modified pixel values for display. In other embodiments pixel values of one or more identified regions of a face in a target image are modified based on the augmentation characteristics derived from corresponding identified regions of a face in a source image. Please see Fig. 1-5. Abstract. HUAI et al. (US 20210314541 A1)- A method of face recognition of low computation complexity is disclosed comprising computing a similarity between the intrinsic values of facial color features of facial skin, eyeball white and teeth of a first face and a second face under the same illuminant estimation. Please see Fig. 1-3. Abstract. HUANG et al. (US 20120099786 A1)- A method for repairing scar images is provided, in which a facial region of an image is detected, a first average skin tone value is subtracted from an original pixel value of at least one pixel to generate a first mask value, the first mask value is divided by a constant to generate a first modified mask value, and the first modified mask value is added to the first average skin tone value to generate a first pixel value to serve as a compensated scar pixel value of the pixel. Please see Fig. 1. Abstract. FUJIWARA et al.
(US 20110234845 A1)- A normal AWB (auto white balance) correction value is calculated based on inputted image data. Further, a face area is identified from the inputted image data and a face AWB correction value is calculated based on image data in the face area. Then, first feature data and second feature data are extracted from the inputted image data and image data in the face area, respectively. A total AWB correction value is calculated in accordance with at least one of the face AWB correction value and the normal AWB correction value based on a comparison result of the first feature data and the second feature data. Thus, an erroneous correction can be prevented in an AWB correction using a face detection function. Please see Fig. 3-6. Abstract. TUNA et al. (US 20150312540 A1)- An apparatus and methods for estimating a chromaticity of illumination from raw image data. In an embodiment, one or more image chromaticity weight is determined based on a distance between the raw image data in a sensor chromaticity space and a nearest point within a locus of sensor illumination chromaticities. In a further embodiment, one or more image chromaticity weight is determined based on a disparity among normalized color channel values. In certain embodiments, image chromaticity estimates are utilized to determine a white point estimate for the raw image data. In embodiments, an electronic device including a camera estimates the chromaticity value of raw image data captured by the camera as part of an AWB pipeline. The electronic device may further determine, for example as part of the AWB pipeline, a white point estimate based, at least in part, on the raw image data chromaticity value estimate(s). Please see Fig. 1-4. Abstract. Ding et al.
(US 20210279445 A1)- A skin detection method includes: dividing a region of interest in a face image into a highlighted region and a non-highlighted region; separately determining a first segmentation threshold of the highlighted region and a second segmentation threshold of the non-highlighted region; obtaining a binary image of the highlighted region based on the first segmentation threshold, and obtaining a binary image of the non-highlighted region based on the second segmentation threshold; fusing the binary image of the highlighted region and the binary image of the non-highlighted region; and identifying, based on a fused image, pores and/or blackheads included in the region of interest. Please see Fig. 3A, 5A, and 6A-6B. Abstract. Kuo et al. (US 20190377969 A1)- A computing device with a digital camera obtains a reference image depicting at least one reference color and calibrates parameters of the digital camera based on the at least one reference color. The computing device captures, by the digital camera, a digital image of an individual utilizing the calibrated parameters. The computing device defines a region of interest in a facial region of the individual depicted in the digital image captured by the digital camera. The computing device generates a skin tone profile for pixels within the region of interest and displays a predetermined makeup product recommendation based on the skin tone profile. Please see Fig. 1-3. Abstract. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752.
The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673 /CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Dec 12, 2022
Application Filed
Jan 23, 2025
Non-Final Rejection — §102, §103
Apr 28, 2025
Response Filed
May 15, 2025
Final Rejection — §102, §103
Nov 03, 2025
Request for Continued Examination
Nov 13, 2025
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555249
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR SUPPORTING VIRTUAL GOLF SIMULATION
2y 5m to grant Granted Feb 17, 2026
Patent 12548171
INFORMATION PROCESSING APPARATUS, METHOD AND MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12541822
METHOD AND APPARATUS OF PROCESSING IMAGE, COMPUTING DEVICE, AND MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12505503
IMAGE ENHANCEMENT
2y 5m to grant Granted Dec 23, 2025
Patent 12482106
METHOD AND ELECTRONIC DEVICE FOR SEGMENTING OBJECTS IN SCENE
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
