Prosecution Insights
Last updated: April 19, 2026
Application No. 18/109,310

METHODS AND SYSTEMS FOR VIRTUAL HAIR COLORING

Status: Non-Final OA (§103)
Filed: Feb 14, 2023
Examiner: CHEN, JOSHUA NMN
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: L'Oréal
OA Round: 3 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85%, above average (34 granted / 40 resolved; +23.0% vs TC avg)
Interview Lift: +26.1%, a strong lift in allowance for resolved cases with an interview
Avg Prosecution: 2y 11m typical timeline (20 applications currently pending)
Total Applications: 60 across all art units (career history)

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 40 resolved cases

Office Action

§103
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 17 is objected to because of the following informalities: the last two lines of claim 17, "a post-processing component comprising computational circuitry to apply a guided filter to the output image," appear substantially similar to the previous limitation. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-5, 10, 12-13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mallick et al. (US 2012/0075331 A1, hereinafter Mallick) in view of Lipowezky ("Automatic Hair Colorization Using Chromaticity Distribution Matching," hereinafter Lipowezky) and Zhang et al. ("Deep Color Consistent Network for Low-Light Image Enhancement," hereinafter Zhang).

Regarding claim 1, Mallick discloses a computer-implemented method comprising executing on a processor (Fig. 1, Processor 107) one or more steps comprising: mapping gray levels from the swatch image and gray levels from a hair portion of an input image by matching their respective frequencies to establish a map relationship between the swatch image and the hair portion, wherein the gray levels of the swatch image are associated to respective swatch color values (Fig. 3, Apply Color 340; Para [0009]: "Systems and methods are provided for digital hair coloring"; Para [0020]: "A target color distribution based on a target hair color is obtained. The target color distribution may be based on an obtained target hair color, such as a target hair color selected through a user interface configured to allow a user to select a target hair color"; Para [0026]: "The target color distribution may include three histograms corresponding to three color channels of a selected color space, such as RGB, CIE Lab, HSV, YIQ, YUV, YCbCr, CMYK, CIE XYZ, HSI, or any other color space. The target color distribution may also include n histograms corresponding to n channels of a selected color space. When the target color distribution is based on histograms corresponding to color channels, the output image may be generated by applying a nonlinear transformation, such as histogram matching applied to one or more color channels, to the starting image to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location"; Para [0059]: "In one or more embodiments, the color distribution is represented by n histograms, each histogram corresponding to a color channel in the color space of the starting image. In RGB color space, the color distribution may be represented as three histograms corresponding to the three color channels"; Para [0072]: "Digital hair color user interface 300 further includes hair color selection interface 340. Hair color selection interface 340 is configured to allow a user to select target hair color 338. For example, hair color selection interface 340 may be configured to display a selection of available hair colors 330-336 from which the user may select target hair color 338. Each available hair color 330-336 is associated with a color distribution"); and coloring a pixel in the hair portion based on a swatch color value determined using a gray level of the pixel and the map relationship (Para [0059]: "In one or more embodiments, the color distribution is represented by n histograms, each histogram corresponding to a color channel in the color space of the starting image. In RGB color space, the color distribution may be represented as three histograms corresponding to the three color channels"; Claim 11: "The computer-implemented method of claim 9, wherein said generating said output image comprises: applying a nonlinear transformation to said starting image to map a starting color distribution of said starting image to said target color distribution").

However, Mallick does not explicitly disclose preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein, or allowing the user to select the swatch image from other images instead of prepared swatch images.

Lipowezky teaches allowing the user to select the swatch image from other images instead of prepared swatch images (Fig. 1c; this step is important to include even though the claims do not include language directed to it: since Mallick's swatch images are prepared before deployment to the user, a swatch image is already prepared when the user selects it, and thus there would be no reason to correct the color accuracy of the swatch image, since it should already be at the correct color). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick with Lipowezky's selection of the swatch image from a user-uploaded image, to effectively improve the user experience when choosing a new hair color.
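The claim-1 limitation at issue is classic histogram (CDF) matching between a swatch image and the hair region, followed by a gray-to-color lookup. The following is a minimal NumPy sketch of that technique, assuming 8-bit grayscale inputs and an RGB swatch; all names are illustrative, and this is not code from the application or from Mallick.

```python
import numpy as np

def build_gray_map(hair_gray: np.ndarray, swatch_gray: np.ndarray) -> np.ndarray:
    """Classic histogram matching: map each hair gray level to the swatch
    gray level with the closest cumulative frequency (assumes uint8 inputs)."""
    # Frequencies (probabilities) of each gray level, per the claim language.
    hair_freq = np.bincount(hair_gray.ravel(), minlength=256) / hair_gray.size
    swatch_freq = np.bincount(swatch_gray.ravel(), minlength=256) / swatch_gray.size
    # Cumulative distribution functions of both gray-level populations.
    hair_cdf, swatch_cdf = np.cumsum(hair_freq), np.cumsum(swatch_freq)
    # For each hair gray level, the swatch gray level with the nearest CDF value.
    return np.searchsorted(swatch_cdf, hair_cdf).clip(0, 255).astype(np.uint8)

def build_color_table(gray_map: np.ndarray, swatch_gray: np.ndarray,
                      swatch_rgb: np.ndarray) -> np.ndarray:
    """256x3 table: hair gray level -> swatch color value (here, the mean
    swatch color observed at the matched swatch gray level)."""
    sums, counts = np.zeros((256, 3)), np.zeros(256)
    np.add.at(sums, swatch_gray.ravel(), swatch_rgb.reshape(-1, 3).astype(float))
    np.add.at(counts, swatch_gray.ravel(), 1)
    colors = sums / np.maximum(counts, 1)[:, None]  # unseen levels stay black here
    return colors[gray_map].astype(np.uint8)        # compose with the gray map

# Coloring then reduces to a per-pixel lookup inside the hair region
# (input_gray and hair_mask are assumed, e.g. from a segmentation step):
# table = build_color_table(build_gray_map(input_gray[hair_mask], swatch_gray),
#                           swatch_gray, swatch_rgb)
# output = input_rgb.copy()
# output[hair_mask] = table[input_gray[hair_mask]]
```

A production implementation would interpolate across swatch gray levels that never occur, rather than leaving them black as this sketch does.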
However, Mallick in view of Lipowezky does not explicitly teach preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein. Zhang teaches preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein (Figure 2, P. 3, Proposed Method: "In this section, we introduce the framework (see Figure 2) and details of DCC-Net, which aims at preserving the color consistency and naturalness in obtaining normal-light images. DCC-Net has three sub-nets (i.e., G-Net, C-Net, R-Net) and one pyramid color embedding (PCE) module."; it would be reasonable for a person of ordinary skill in the art to apply a processing technique for one type of image to another type of image). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick in view of Lipowezky with Zhang's low-light enhancement of an image, to better retain color accuracy when improving the illumination of an image.

Regarding claim 3, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. Mallick further discloses that the respective frequencies are probabilities of the gray levels occurring in the swatch image or the hair portion, respectively (Para [0013]: "As used herein, the term "color distribution" refers to any mathematical description of the color composition of a set of pixels of an image, including an image region or an entire image. In one or more embodiments, a color distribution is a probability distribution of pixel values"; Para [0020]: "A target color distribution based on a target hair color is obtained. The target color distribution may be based on an obtained target hair color, such as a target hair color selected through a user interface configured to allow a user to select a target hair color").

Regarding claim 4, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. Mallick further discloses that the frequencies are represented by histograms or by cumulative distribution functions (Para [0026]: "The target color distribution may include three histograms corresponding to three color channels of a selected color space, … When the target color distribution is based on histograms corresponding to color channels, the output image may be generated by applying a nonlinear transformation, such as histogram matching applied to one or more color channels, to the starting image to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location"; Para [0059]: "In one or more embodiments, the color distribution is represented by n histograms, each histogram corresponding to a color channel in the color space of the starting image. In RGB color space, the color distribution may be represented as three histograms corresponding to the three color channels"; Para [0097]: "When the target color distribution is based on histograms corresponding to color channels, a nonlinear color transform may be used to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location. In one or more embodiments, the nonlinear color transform is histogram matching. For example, where the starting image is described in a three channel color space, each of the starting three color histograms computed from the pixels in the hair region of the starting image are each independently transformed to the three color histograms of the target color distribution").

Regarding claim 5, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. Mallick further discloses calculating a mapping table that maps the gray levels from the hair portion to the swatch color values (Para [0101]: "A nonlinear mapping from transformed source colors bs to transformed target colors can be established with histogram matching on at least one of the color channels of the transformed sample colors bs and bt. Denoting this mapping as H, colors from the starting image may then be transformed by the nonlinear transformation: V_t * S_t * H(S_s^-1 * V_s' * (c_s - m_s)) + m_t. The mapping H may be stored as three look up tables.").

Regarding claim 10, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. Mallick further discloses displaying an output image comprising the input image and the hair portion as colored (Figs. 3, 9, and 10; Para [0078]: "Digital hair coloring user interface 300 further includes output image display area 326. Output image display area 326 is configured to display output image 328. Output image 328 shows the result of the digital hair coloring procedure performed on starting image 200 to digitally color a hair region 204 of subject 202 based on target hair color 338. The color distribution associated with target hair color 338 is used to generate output image 328").

Regarding claim 12, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. Mallick further discloses calculating the frequencies of each of the gray levels in the swatch image; and calculating the frequencies of each of the gray levels in the hair portion of the input image (Para [0026]: "The target color distribution may include three histograms corresponding to three color channels of a selected color space, … When the target color distribution is based on histograms corresponding to color channels, the output image may be generated by applying a nonlinear transformation, such as histogram matching applied to one or more color channels, to the starting image to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location"; Para [0059]: "In one or more embodiments, the color distribution is represented by n histograms, each histogram corresponding to a color channel in the color space of the starting image. In RGB color space, the color distribution may be represented as three histograms corresponding to the three color channels"; Para [0097]: "When the target color distribution is based on histograms corresponding to color channels, a nonlinear color transform may be used to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location. In one or more embodiments, the nonlinear color transform is histogram matching. For example, where the starting image is described in a three channel color space, each of the starting three color histograms computed from the pixels in the hair region of the starting image are each independently transformed to the three color histograms of the target color distribution").

Regarding claims 13 and 18, Mallick discloses a system (claim 13) and a computer-implemented method comprising executing on a processor one or more steps (claim 18), each comprising: coloring in a virtual try on (VTO) rendering pipeline respective pixels in a hair portion of an input image based on respective color values as corrected from the swatch image, wherein the respective color values are selected from the swatch image using, for each respective pixel, a gray value of the respective pixel and a mapping relationship to a gray value of the swatch image associated to a respective color value (Fig. 3, Apply Color 340; Para [0009]: "Systems and methods are provided for digital hair coloring"; Para [0020]: "A target color distribution based on a target hair color is obtained. The target color distribution may be based on an obtained target hair color, such as a target hair color selected through a user interface configured to allow a user to select a target hair color"; Para [0026]: "The target color distribution may include three histograms corresponding to three color channels of a selected color space, such as RGB, CIE Lab, HSV, YIQ, YUV, YCbCr, CMYK, CIE XYZ, HSI, or any other color space. The target color distribution may also include n histograms corresponding to n channels of a selected color space. When the target color distribution is based on histograms corresponding to color channels, the output image may be generated by applying a nonlinear transformation, such as histogram matching applied to one or more color channels, to the starting image to map a starting color distribution of the starting image to the target color distribution, where the color transformation is independent of pixel location"; Para [0059]: "In one or more embodiments, the color distribution is represented by n histograms, each histogram corresponding to a color channel in the color space of the starting image. In RGB color space, the color distribution may be represented as three histograms corresponding to the three color channels"; Para [0072]: "Digital hair color user interface 300 further includes hair color selection interface 340. Hair color selection interface 340 is configured to allow a user to select target hair color 338. For example, hair color selection interface 340 may be configured to display a selection of available hair colors 330-336 from which the user may select target hair color 338. Each available hair color 330-336 is associated with a color distribution"); and presenting in a user interface an output image comprising the input image and the hair portion as colored (Figs. 3, 9, and 10; Para [0078]: "Digital hair coloring user interface 300 further includes output image display area 326. Output image display area 326 is configured to display output image 328. Output image 328 shows the result of the digital hair coloring procedure performed on starting image 200 to digitally color a hair region 204 of subject 202 based on target hair color 338. The color distribution associated with target hair color 338 is used to generate output image 328").
However, Mallick does not explicitly disclose preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein, or allowing the user to select the swatch image from other images instead of prepared swatch images. Lipowezky teaches allowing the user to select the swatch image from other images instead of prepared swatch images (Fig. 1c; this step is important to include even though the claims do not include language directed to it: since Mallick's swatch images are prepared before deployment to the user, a swatch image is already prepared when the user selects it, and thus there would be no reason to correct the color accuracy of the swatch image, since it should already be at the correct color). However, Mallick in view of Lipowezky does not explicitly teach preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein. Zhang teaches preprocessing a swatch image using a deep neural network to correct the accuracy of the coloring therein (Figure 2, P. 3, Proposed Method: "In this section, we introduce the framework (see Figure 2) and details of DCC-Net, which aims at preserving the color consistency and naturalness in obtaining normal-light images. DCC-Net has three sub-nets (i.e., G-Net, C-Net, R-Net) and one pyramid color embedding (PCE) module."; it would be reasonable for a person of ordinary skill in the art to apply a processing technique for one type of image to another type of image).

Claims 2, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mallick et al. (US 2012/0075331 A1, hereinafter Mallick) in view of Lipowezky ("Automatic Hair Colorization Using Chromaticity Distribution Matching," hereinafter Lipowezky), Zhang et al. ("Deep Color Consistent Network for Low-Light Image Enhancement," hereinafter Zhang), and Liu et al. (US 2022/0327749 A1, hereinafter Liu).

Regarding claim 2, dependent upon claim 1, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 1. However, Mallick in view of Lipowezky and Zhang does not teach determining the hair portion from the input image via a deep neural network. Liu teaches determining the hair portion from the input image via a deep neural network (Para [0037]: "the first mask image is then outputted based on the neural network model. The first mask image represents the hair area in the portrait image. The neural network model may be another model such as a convolutional neural network (CNN) model, which is not limited in this embodiment"). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick in view of Lipowezky and Zhang with Liu's use of a neural network to segment out a mask representing the hair region of a portrait, to effectively improve the user experience of virtual hair coloring.

Regarding claim 14, dependent upon claim 13, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 13. However, Mallick in view of Lipowezky and Zhang does not teach a hair detection engine comprising computational circuitry to determine the hair portion from the input image via a deep neural network. Liu teaches a hair detection engine comprising computational circuitry to determine the hair portion from the input image via a deep neural network (Para [0037]: "the first mask image is then outputted based on the neural network model. The first mask image represents the hair area in the portrait image. The neural network model may be another model such as a convolutional neural network (CNN) model, which is not limited in this embodiment").

Regarding claim 19, dependent upon claim 18, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 18. However, Mallick in view of Lipowezky and Zhang does not teach determining the hair portion from the input image via a deep neural network of a hair detection engine. Liu teaches determining the hair portion from the input image via a deep neural network of a hair detection engine (Para [0037]: "the first mask image is then outputted based on the neural network model. The first mask image represents the hair area in the portrait image. The neural network model may be another model such as a convolutional neural network (CNN) model, which is not limited in this embodiment").

Claims 6, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mallick et al. (US 2012/0075331 A1, hereinafter Mallick) in view of Lipowezky ("Automatic Hair Colorization Using Chromaticity Distribution Matching," hereinafter Lipowezky), Zhang et al. ("Deep Color Consistent Network for Low-Light Image Enhancement," hereinafter Zhang), and Anderegg (US 11,069,119 B1, hereinafter Anderegg).

Regarding claim 6, dependent upon claim 5, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 5. However, Mallick in view of Lipowezky and Zhang does not explicitly teach that coloring the pixel of the hair portion is performed by a Graphics Processing Unit (GPU) using a shader (the examiner notes that a GPU is disclosed in Mallick at Para [0048] to accelerate the graphics rendering process, and it is well known to persons of ordinary skill in the art that a modern GPU will have a shader; nevertheless, Mallick only discloses a GPU without explicitly stating that a shader exists within the system). Anderegg teaches that coloring the pixel of the hair portion is performed by a Graphics Processing Unit (GPU) using a shader (Col. 9, Lns. 44-49: "For instance, certain shaders 302 and/or 304 and their corresponding shader components may be configured to execute on a GPU to program the GPU's rendering pipeline (as opposed to operating with a fixed-function pipeline that only allows for common geometry transforming and pixel-shading functions)"; Col. 9, Lns. 52-61: "For example, along with various shader function examples that have been mentioned or described above (e.g., fog effects, color correction, etc.), various other shader functions may also be implemented by shader components to modify position and/or color (e.g., hue, saturation, brightness, contrast, etc.) of any pixels, vertices, and/or textures of an object in a manner that allows for a final rendered image to be constructed and/or altered using algorithms defined in the shader 302 or 304 (or corresponding shader component included therein)"; Col. 10, Lns. 3-5: "rendering functions for certain common objects associated with particular shading considerations (e.g., human hair, clothing, animal fur, etc.)"). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick in view of Lipowezky and Zhang with Anderegg's use of a shader to accelerate the rendering of hair color, to effectively improve the user experience by improving the rendering of effects and objects in simulated 2D or 3D worlds.
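Claim 6 above, and claims 7-8 below, move the gray-to-color lookup onto the GPU: the mapping table is handed to a fragment shader as a linearly filtered 1D texture and sampled once per pixel. In GLSL the fetch would be a one-line texture() call; the NumPy stand-in below only illustrates the interpolation semantics of such a fetch, assuming the 256x3 uint8 table built in the earlier sketch. It is not the application's implementation.

```python
import numpy as np

def sample_color_table(table: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Linearly interpolated lookup into a 256x3 color table, mimicking what
    a linearly filtered 1D texture fetch would return in a fragment shader."""
    xs = np.arange(256, dtype=float)
    g = np.clip(np.asarray(gray, dtype=float), 0.0, 255.0)  # fractional levels OK
    channels = [np.interp(g, xs, table[:, c].astype(float)) for c in range(3)]
    return np.stack(channels, axis=-1).astype(np.uint8)
```

Interpolating between table entries (claim 8) avoids banding when the 256-entry table is coarser than the effective gray-level resolution of the filtered input.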
Regarding claim 15, dependent upon claim 13, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 13. However, Mallick in view of Lipowezky and Zhang does not explicitly teach a shader to color the respective pixels in the hair portion of the input image. Anderegg teaches a shader to color the respective pixels in the hair portion of the input image (Col. 9, Lns. 44-49: "For instance, certain shaders 302 and/or 304 and their corresponding shader components may be configured to execute on a GPU to program the GPU's rendering pipeline (as opposed to operating with a fixed-function pipeline that only allows for common geometry transforming and pixel-shading functions)"; Col. 9, Lns. 52-61: "For example, along with various shader function examples that have been mentioned or described above (e.g., fog effects, color correction, etc.), various other shader functions may also be implemented by shader components to modify position and/or color (e.g., hue, saturation, brightness, contrast, etc.) of any pixels, vertices, and/or textures of an object in a manner that allows for a final rendered image to be constructed and/or altered using algorithms defined in the shader 302 or 304 (or corresponding shader component included therein)"; Col. 10, Lns. 3-5: "rendering functions for certain common objects associated with particular shading considerations (e.g., human hair, clothing, animal fur, etc.)").

Regarding claim 20, dependent upon claim 18, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 18. However, Mallick in view of Lipowezky and Zhang does not explicitly teach a shader to color the respective pixels in the hair portion of the input image. Anderegg teaches a shader to color the respective pixels in the hair portion of the input image (Col. 9, Lns. 44-49: "For instance, certain shaders 302 and/or 304 and their corresponding shader components may be configured to execute on a GPU to program the GPU's rendering pipeline (as opposed to operating with a fixed-function pipeline that only allows for common geometry transforming and pixel-shading functions)"; Col. 9, Lns. 52-61: "For example, along with various shader function examples that have been mentioned or described above (e.g., fog effects, color correction, etc.), various other shader functions may also be implemented by shader components to modify position and/or color (e.g., hue, saturation, brightness, contrast, etc.) of any pixels, vertices, and/or textures of an object in a manner that allows for a final rendered image to be constructed and/or altered using algorithms defined in the shader 302 or 304 (or corresponding shader component included therein)"; Col. 10, Lns. 3-5: "rendering functions for certain common objects associated with particular shading considerations (e.g., human hair, clothing, animal fur, etc.)").

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Mallick et al. (US 2012/0075331 A1, hereinafter Mallick) in view of Lipowezky ("Automatic Hair Colorization Using Chromaticity Distribution Matching," hereinafter Lipowezky), Zhang et al. ("Deep Color Consistent Network for Low-Light Image Enhancement," hereinafter Zhang), Anderegg (US 11,069,119 B1, hereinafter Anderegg), and Anonymous ("Array Texture," OpenGL Wiki, hereinafter Array Texture).

Regarding claim 7, dependent upon claim 6, Mallick in view of Lipowezky, Zhang, and Anderegg teaches all of the elements as stated above regarding claim 6.
However, Mallick in view of Lipowezky, Zhang, and Anderegg does not teach providing the mapping table to the shader using a 1D texture. Array Texture teaches providing the mapping table to the shader using a 1D texture (P. 1, Para. 004: "Array texture are not usable from the fixed function pipeline; you must use a Shader to access them"; P. 1, Para. 006: "1D array textures are created by binding a newly-created texture object to GL_TEXTURE_1D_ARRAY, them creating storage for one or more mipmaps of the texture"; P. 2, Para. 001-002: "Every row of pixel data in the "2D" array of pixels is considered a separate 1D layer. 2D array textures are created similarly; bind a newly-created texture object to GL_TEXTURE_2D_ARRAY, then use the "3D" image functions to allocate storage. The depth parameter sets the number of layers in the array"). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick in view of Lipowezky, Zhang, and Anderegg with Array Texture's storage of information in a 1D array, to effectively increase the efficiency of texture lookup.

Regarding claim 8, dependent upon claim 7, Mallick in view of Lipowezky, Zhang, Anderegg, and Array Texture teaches all of the elements as stated above regarding claim 7. Array Texture further teaches interpolating the mapping table (P. 2, Para. 001-002: "Every row of pixel data in the "2D" array of pixels is considered a separate 1D layer. 2D array textures are created similarly; bind a newly-created texture object to GL_TEXTURE_2D_ARRAY, then use the "3D" image functions to allocate storage. The depth parameter sets the number of layers in the array").

Claims 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mallick et al. (US 2012/0075331 A1, hereinafter Mallick) in view of Lipowezky ("Automatic Hair Colorization Using Chromaticity Distribution Matching," hereinafter Lipowezky), Zhang et al. ("Deep Color Consistent Network for Low-Light Image Enhancement," hereinafter Zhang), and Levinshtein et al. (US 2020/0320748 A1, hereinafter Levinshtein).

Regarding claim 11, dependent upon claim 10, Mallick in view of Lipowezky and Zhang teaches all of the elements as stated above regarding claim 10. However, Mallick in view of Lipowezky and Zhang does not explicitly teach processing the input image to define a hair mask for the hair portion; and processing the output image using a guided filter, the hair mask and the input image to recalculate edges of the hair portion to smooth the edges and limit a bleeding of hair colouring outside an original hair position from the input image. Levinshtein teaches processing the input image to define a hair mask for the hair portion; and processing the output image using a guided filter, the hair mask and the input image to recalculate edges of the hair portion to smooth the edges and limit a bleeding of hair colouring outside an original hair position from the input image (Fig. 3, Fig. 7; Para [0096]: "The matte output of Model 3 of the architecture of FIG. 5 was compared to the coarser mask output of Model 1 of the architecture of FIG. 3 and with that output of Model 1 with the addition of a guided filter. A guided filter is an edge-preserving filter and has a linear runtime complexity with respect to the image size. It takes only 5 ms to process a 224x224 image on iPad Pro. FIG. 7 shows an image table 700 depicting qualitative results of models of FIGS. 3 and 5. Shown is image 702 to be processed (an example of image 100). Image 704 is the mask from Model 1 of FIG. 3, without guided filter post-processing. Image 706 is the mask from Model 1, with added guided filter post-processing. Image 708 is the mask (or matte) output from Model 3 of FIG. 5"; Para [0095]: "However, the guided filter adds detail only locally near the edges of the mask. Moreover, the edges of the refined masks have a visible halo around them, which becomes even more apparent when the hair color has lower contrast with its surroundings. This halo causes color bleeding during hair recoloring. The architecture of FIG. 5 yields sharper edges (as seen in image 708) and captures longer hair strands, without the unwanted halo effect seen in guided filter post-processing."; observing Fig. 7, images 704 and 706, it can be seen that the guided filter created sharper edges for the hair mask. Although Para [0095] states that there is still color bleeding, image 704 shows even worse bleeding than image 706. As such, the use of the guided filter can be seen to have the effect of reducing color bleeding, just as hair mask 708 shows even less color bleeding than 706, with sharper edges for the hair.). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallick in view of Lipowezky and Zhang with Levinshtein's inclusion of a guided filter to process the output image to reduce bleeding when coloring, to effectively improve the accuracy and efficiency of identifying the hair portion of a portrait.

Regarding claim 17, dependent upon claim 13, Mallick discloses all of the elements as stated above regarding claim 13. However, Mallick in view of Lipowezky and Zhang does not explicitly teach hair mask computational circuitry to process the input image to define a hair mask for the hair portion; and a post-processing component comprising computational circuitry to apply a guided filter to the output image, wherein the guided filter uses the hair mask and the input image to recalculate edges of the hair portion to smooth the edges and limit a bleeding of hair colouring outside an original hair position from the input image. Levinshtein teaches hair mask computational circuitry to process the input image to define a hair mask for the hair portion; and a post-processing component comprising computational circuitry to apply a guided filter to the output image, wherein the guided filter uses the hair mask and the input image to recalculate edges of the hair portion to smooth the edges and limit a bleeding of hair colouring outside an original hair position from the input image (Fig. 3, Fig. 7; Para [0096]: "The matte output of Model 3 of the architecture of FIG. 5 was compared to the coarser mask output of Model 1 of the architecture of FIG. 3 and with that output of Model 1 with the addition of a guided filter. A guided filter is an edge-preserving filter and has a linear runtime complexity with respect to the image size. It takes only 5 ms to process a 224x224 image on iPad Pro. FIG. 7 shows an image table 700 depicting qualitative results of models of FIGS. 3 and 5. Shown is image 702 to be processed (an example of image 100). Image 704 is the mask from Model 1 of FIG. 3, without guided filter post-processing. Image 706 is the mask from Model 1, with added guided filter post-processing. Image 708 is the mask (or matte) output from Model 3 of FIG. 5"; Para [0095]: "However, the guided filter adds detail only locally near the edges of the mask. Moreover, the edges of the refined masks have a visible halo around them, which becomes even more apparent when the hair color has lower contrast with its surroundings. This halo causes color bleeding during hair recoloring. The architecture of FIG. 5 yields sharper edges (as seen in image 708) and captures longer hair strands, without the unwanted halo effect seen in guided filter post-processing."; observing Fig. 7, images 704 and 706, it can be seen that the guided filter created sharper edges for the hair mask. Although Para [0095] states that there is still color bleeding, image 704 shows even worse bleeding than image 706. As such, the use of the guided filter can be seen to have the effect of reducing color bleeding, just as hair mask 708 shows even less color bleeding than 706, with sharper edges for the hair.).

Relevant Prior Art Directed to the State of the Art

Mallick et al. (US 8,884,980 B2, hereinafter Mallick) is prior art not applied in the rejection(s) above. Mallick discloses a system and method for digital hair coloring based on the color distribution of the input image.

Kowalczyk et al. (US 10,217,244 B2, hereinafter Kowalczyk) is prior art not applied in the rejection(s) above. Kowalczyk discloses a method for computer-assisted hair coloring guidance.

Alashkar et al. (US 11,861,771 B2, hereinafter Alashkar) is prior art not applied in the rejection(s) above. Alashkar discloses a virtual hair extension system that blends the selected hair extension into the hair region in the input image.

Liu et al. (US 11,410,345 B2, hereinafter Liu) is prior art not applied in the rejection(s) above. Liu discloses a system that segments the hair area of a portrait using a neural network.

Zhang (US 2019/0294916 A1, hereinafter Zhang) is prior art not applied in the rejection(s) above. Zhang discloses a graphic manipulation application that determines a salient object in an image. The graphic manipulation application also extracts multiple swatches from the image. In some cases, the graphic manipulation application computes selection scores for the multiple swatches by combining a computed likelihood score and a dominance score. Additionally or alternatively, the graphic manipulation application generates, based on the selection scores of the multiple swatches, a subset of the multiple swatches extracted from the image.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA CHEN, whose telephone number is (703) 756-5394. The examiner can normally be reached M-Th 9:30 am - 4:30 pm ET and F 9:30 am - 2:30 pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, STEPHEN R KOZIOL, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J. C./
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665
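The guided-filter post-processing disputed in claims 11 and 17 above is a standard edge-preserving operation (He et al., "Guided Image Filtering"): the coarse hair mask is smoothed so that its edges follow the edges of the input photo, which feathers the compositing boundary and limits color bleed. Below is a minimal single-channel sketch using OpenCV box filters, with illustrative variable names and inputs assumed to be float arrays in [0, 1]; it is not the application's implementation (opencv-contrib also ships a ready-made cv2.ximgproc.guidedFilter).

```python
import cv2
import numpy as np

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 8, eps: float = 1e-3) -> np.ndarray:
    """Minimal single-channel guided filter: smooth `src` while snapping its
    edges to the edges of `guide` (both float32, values in [0, 1])."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)    # normalized box filter
    I, p = guide.astype(np.float32), src.astype(np.float32)
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p          # covariance(guide, src)
    var_I = mean(I * I) - mean_I * mean_I           # variance of the guide
    a = cov_Ip / (var_I + eps)                      # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)

# Feather the coarse mask (e.g. from a segmentation network) against the
# grayscale input, then composite the recolored hair through the refined
# mask so coloring stays inside the original hair position:
# refined = np.clip(guided_filter(input_gray, coarse_mask), 0.0, 1.0)
# output = refined[..., None] * recolored + (1.0 - refined[..., None]) * original
```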

Prosecution Timeline

Feb 14, 2023
Application Filed
Apr 02, 2025
Non-Final Rejection — §103
Jul 08, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103
Nov 17, 2025
Response after Non-Final Action
Dec 16, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602747
METHOD AND APPARATUS FOR DENOISING A LOW-LIGHT IMAGE
2y 5m to grant • Granted Apr 14, 2026
Patent 12592090
COMPENSATION OF INTENSITY VARIANCES IN IMAGES USED FOR COLONY ENUMERATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12579614
IMAGING DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12579678
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant • Granted Mar 17, 2026
Patent 12573065
Vision Sensing Device and Method
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 99% (+26.1%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 40 resolved cases by this examiner. Grant probability is derived from the career allow rate.
