Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,251

ACCURATE COLOR REPRODUCTION IN COMPUTER-RENDERED HAIR

Non-Final OA §103
Filed: Apr 17, 2024
Examiner: NGUYEN, PHONG X
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: L'Oréal
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (297 granted / 397 resolved; +12.8% vs TC avg; above average)
Interview Lift: +25.3% for resolved cases with interview (strong)
Typical Timeline: 2y 9m average prosecution; 12 applications currently pending
Career History: 409 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Deltas are measured against an estimated Tech Center average • Based on career data from 397 resolved cases
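The deltas above are offsets from an estimated Tech Center average, and the headline allow rate is a simple ratio of the career counts. A small sketch recomputing both from the figures quoted on this page (a toy calculation; variable names are illustrative):

```python
# Recomputing two dashboard figures from the career counts quoted above.
granted = 297                 # career grants
resolved = 397                # career resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")        # prints "Career allow rate: 75%"

# The table shows §103 at 52.8%, which is +12.8% vs the Tech Center
# average, implying a TC baseline of about:
tc_baseline_103 = 52.8 - 12.8
print(f"Implied TC baseline for §103: {tc_baseline_103:.1f}%")
```

Note that 297/397 is 74.8%, so the displayed 75% is a rounded figure; the per-statute percentages presumably come from the same resolved-case pool.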

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 8-14 recite a “computer-implemented of…” The examiner suggests the following amendment: “computer-implemented method of…” to fix the apparent typographical error.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Degenhard et al. (Pub. No. US 2016/0110915), in view of Romaszko et al. (“Vision-as-Inverse-Graphics: Obtaining a Rich 3D Explanation of a Scene from a Single Image”, Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 851-859).

Regarding claim 1, Degenhard discloses a computer-implemented method of rendering hair, the method comprising: obtaining, by a computing system, a captured image of a hair swatch (Par. 134: “The camera 150 is configured to obtain an image of a plurality of real fibers 160 and to communicate this image to the processor 120 of the computer 130”, par. 135: “In the example of FIGS. 1 and 2, the fibers in question comprise human hair”, and par. 144: “In step 221, the camera 150 captures an image of the hair fibers 160 wrapped around the material holder 165”); using, by the computing system, an inverse graphics encoder to determine a set of estimated hair parameters based on the captured image (Par. 144: “In step 222, the processor 120 processes the image to determine an initial set of estimated fiber parameters”); using, by the computing system, a non-differentiable renderer to generate a rendered image based on the set of estimated hair parameters (Par. 144: “The method then follows an “analysis by synthesis” approach. In step 223, the processor 120 simulates, using the computational model and the initial estimated fiber parameters, how light would interact with a plurality of fibers having those initial estimated parameters”, and par. 153: “The interactions of light with the fibers may be simulated and the synthetic image rendered using a Monte Carlo path tracing algorithm”. Note that claim 4 of the present application further clarifies that the non-differentiable renderer includes a path tracing renderer) and a set of scene parameters (Par. 202: “In a sequence of images of hair fibers, at least one (or a combination of any two or more) of the following may vary: the position or orientation of the head; the positions or shapes of fibers on the head; the position or orientation of a light source; and the position or orientation of a virtual camera, in the simulation”); and providing, by the computing system, the rendered image for display on a display device (Par. 95: “The further synthetic image showing the fibers with the adjusted parameters is preferably displayed to the user on a display screen”), wherein the set of estimated hair parameters include at least one of: one or more dye color values; a dye concentration value; a melanin concentration value; and a eumelanin/pheomelanin ratio value (Par. 16: “The modification of the first set of fiber parameters may represent or approximate a physical modification of the real fibers—for example, a treatment applied to the real fibers, such as a chemical treatment. The chemical treatment may comprise at least one of (or any combination of two or more of): a pigment, structural color, dye, polymeric dye, optical effect and bleaching”, and par. 87: “each set of parameters comprises at least one color parameter related to a color of the respective plurality of fibers”).

Degenhard does not explicitly disclose using an inverse graphics encoder to determine the set of estimated hair parameters. However, it should be noted that Degenhard discloses using an “analysis by synthesis” approach (See par. 144 again). In the same field of computer graphics, Romaszko teaches an inverse graphics approach to the problem of scene understanding, wherein object parameters (e.g. shape, appearance, and pose) and scene parameters (e.g. camera parameters and lighting) are obtained via a vision-as-inverse-graphics (VIG) or analysis-by-synthesis framework (See the abstract and section 1, Introduction). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to implement the analysis-by-synthesis of Degenhard with the inverse graphics renderer taught by Romaszko. The motivation would have been to produce a compact and interpretable representation of a scene in terms of an arbitrary number of objects (Romaszko, 3rd paragraph of section 1, Introduction).
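The combination rests on Degenhard's “analysis by synthesis” loop: render an image from the current parameter estimates, compare it against the captured image, and update the estimates until they converge. A rough, runnable illustration of that loop structure only, where a toy function stands in for the non-differentiable Monte Carlo path tracer and all names are hypothetical rather than taken from the cited references:

```python
import numpy as np

def render(params, scene):
    # Toy stand-in for a non-differentiable (e.g. Monte Carlo
    # path-tracing) renderer: maps hair parameters and scene
    # parameters to a small "image" array.
    return np.outer(np.sin(params), np.cos(scene))

def analysis_by_synthesis(captured, init_params, scene,
                          n_iters=200, step=0.05, eps=1e-3):
    # Iteratively refine parameter estimates so the rendered image
    # matches the captured image. Because the renderer is treated
    # as a black box, gradients are estimated by finite differences.
    params = init_params.copy()
    for _ in range(n_iters):
        base = np.sum((render(params, scene) - captured) ** 2)
        grad = np.zeros_like(params)
        for i in range(params.size):
            probe = params.copy()
            probe[i] += eps
            grad[i] = (np.sum((render(probe, scene) - captured) ** 2) - base) / eps
        params -= step * grad
    return params

scene = np.linspace(0.0, 1.0, 8)          # fixed scene parameters
true_params = np.array([0.3, 0.7])        # "ground truth" hair parameters
captured = render(true_params, scene)     # plays the captured swatch image
estimated = analysis_by_synthesis(captured, np.array([0.5, 0.5]), scene)
```

The finite-difference gradient here is one way to update a black-box renderer's parameters; the cited Degenhard passages instead describe computing an error metric between the synthetic and captured images and iterating until the estimates stabilize.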
Regarding claim 2, Degenhard in view of Romaszko teaches the computer-implemented method of claim 1, wherein the set of estimated hair parameters include at least one heterogeneity hair parameter (Degenhard, par. 36: “Alternatively or in addition, it can model variations in optical properties (such as color or lightness) along the length of individual hairs, for some or all of the hairs—for example, a differently colored root, at an end of the hair fiber nearest to the scalp”).

Regarding claim 3, Degenhard in view of Romaszko teaches the computer-implemented method of claim 1, wherein the inverse graphics encoder includes at least one of a fully convolutional neural network and a transformer network (Romaszko, page 855, section Core details of the networks: “Our main recognition networks are based on VGG-16 network and were optimized on a validation dataset. The networks use all 13 convolutional layers of VGG for 128 × 128 input, but without the last two max-pooling layers in order to be more spatially accurate, resulting in an output of size 512 × 16 × 16. We then train three convolutional layers with 50 filters each of a size 512/50/50 × 6 × 6”).

Regarding claim 4, Degenhard in view of Romaszko teaches the computer-implemented method of claim 1, wherein the non-differentiable renderer includes at least one of a ray tracing renderer, a path tracing renderer, a ray casting renderer, and a light transport renderer (Degenhard, par. 153: “The interactions of light with the fibers may be simulated and the synthetic image rendered using a Monte Carlo path tracing algorithm”).

Regarding claim 5, Degenhard in view of Romaszko teaches the computer-implemented method of claim 1, wherein the set of scene parameters indicate at least one of a camera position and a hairstyle (Degenhard, par. 202: “multiple synthetic images can be generated from respective different viewpoints (that is, positions and orientations of a virtual camera, in the simulation)”).
Regarding claim 6, Degenhard in view of Romaszko teaches the computer-implemented method of claim 5, wherein the hairstyle is a straight hairstyle (Degenhard, par. 23: “This can provide a type of “analysis-by-synthesis” approach to measuring the fiber parameters of real fibers. The parameters are derived iteratively, by simulating how light would interact with fibers assuming that they had the currently estimated parameters, and comparing the results of the simulation with the appearance of the real fibers. The parameters are then updated, and the simulation and comparison are performed again, to see if the new parameters match the real fibers better or worse. After several iterations, this approach can converge to a set of parameters that accurately models the real fibers”. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention that the appearance of the real fibers would include curly hair and straight hair in order to simulate a variety of hairstyles).

Claims 15, 19 and 20 recite similar limitations as respective claims 1, 3 and 4, but are directed to a system comprising a camera and a hair-rendering computing system. Since Degenhard also teaches such a system (See Fig. 1), these claims could be rejected under the same rationales set forth in the rejection of their respective claims.

Regarding claim 16, Degenhard in view of Romaszko teaches the system of claim 15, wherein the camera system includes: a camera (Fig. 1 of Degenhard shows a camera 150); and at least one lighting source arranged to illuminate the hair swatch (Degenhard, par. 138: “The optical characteristics of the hair fibers are described by the fiber parameters and the light illuminating the fibers on the head is determined from the light source parameters”).
Regarding claim 17, Degenhard in view of Romaszko teaches the system of claim 16, further comprising: a surface holder configured to hold the hair swatch at a fixed location in relation to the camera and the at least one lighting source (Degenhard, par. 140: “FIG. 3 illustrates an exemplary apparatus for measuring the optical properties of a sample of fibers, such as the hair fibers 160 in FIG. 1. The apparatus comprises a material holder 165 comprising a curvilinear surface; a collimated light source 170; and the camera 150”).

Regarding claim 18, Degenhard in view of Romaszko teaches the system of claim 17, wherein the surface holder has a flat surface or a curved surface (Degenhard discloses in par. 140 that the material holder could have a curvilinear surface).

Claims 7-14 are rejected under 35 U.S.C. 103 as being unpatentable over Degenhard, in view of Romaszko, and further in view of Öztireli et al. (Pub. No. US 2021/0065434).

Regarding claim 7, Degenhard in view of Romaszko teaches the computer-implemented method of claim 1, wherein the inverse graphics encoder is trained by performing actions comprising: determining a set of hair parameters (Degenhard, par. 22: “obtaining an image of the plurality of real fibers; processing the image to determine a set of estimated fiber parameters”); generating a rendered training image (Degenhard, par. 22: “simulating, using the model and the estimated fiber parameters, how light would interact with fibers with the estimated parameters”. See also pars. 144 and 153 of Degenhard cited in the rejection of claim 1); determining estimated hair parameters based on the rendered training image (Degenhard, par. 22: “computing an error metric comparing the result of the simulation with the content of the image; updating the estimated parameters, based on the error metric”); and optimizing the inverse graphics encoder (Degenhard, par. 22: “iterating the steps of simulating, computing, and updating, until the estimated fiber parameters converge to stable values; and determining the first set of fiber parameters based on the converged estimated fiber parameters”). Additionally, Romaszko teaches using the rendered image as a training image (See section 4, Experimental setup. When combined with Degenhard, object parameters would include hair parameters). Romaszko also teaches using an inverse graphics encoder to determine estimated object parameters, as explained in the rejection of claim 1. Degenhard in view of Romaszko, however, does not teach optimizing the inverse graphics encoder based on a gradient of a loss function. In the same field of inverse rendering, Öztireli teaches this limitation (See pars. 44-46). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further modify Degenhard by optimizing the inverse graphics encoder based on a gradient of a loss function, as taught by Öztireli. The motivation would have been to provide a more effective technique for determining attributes associated with 3D scenes based on 2D images (Öztireli, par. 8).

Claim 8 could be rejected using the same rationale set forth in the rejection of claims 1 and 7 above.

Regarding claim 9, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, wherein determining the set of hair parameters includes randomly sampling values for the set of hair parameters (Degenhard, par. 157: “In step 241, the parallel calculations are initialized by the processor 120. This comprises dividing the simulation task between the first processor 120a and the second processor 120b. In particular, the processor 120 generates a first seed for the first processor 120a and a second different seed for the second processor 120b. Each seed will be used by the respective processor in a random number generator that generates samples according to the Monte Carlo path-tracing algorithm. Because the seeds are different, each processor 120a, 120b will generate a different set of samples. Each sample traces the path of a ray of light from a light source to a pixel in the synthetic image, via one or more interactions with the plurality of fibers (in the virtual world being simulated)”).

Regarding claim 10, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, wherein determining the set of hair parameters includes selecting hair parameters from a uniform distribution for each hair parameter (Romaszko, section 4, Experimental setup, first paragraph: "Object colours are sampled uniformly in RGB space").

Regarding claim 11, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, wherein optimizing the inverse graphics encoder includes repeating the actions of determining the set of hair parameters, generating the rendered training image, determining the estimated hair parameters, and determining the gradient of the loss function for multiple sets of hair parameters (Öztireli, pars. 44-46).

Regarding claim 12, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, wherein optimizing the inverse graphics encoder includes using an Adam optimizer (Romaszko, page 855, section Core details of the networks: "Networks are trained by SGD with Adam optimisation algorithm").

Regarding claim 13, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, wherein the inverse graphics encoder includes at least one of a fully convolutional neural network and a transformer network (See the rejection of claim 3).
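Claims 7-12 describe a training procedure for the encoder: sample hair parameters (uniformly, per claim 10), render a training image, have the encoder predict the parameters back, and optimize on the gradient of a loss with Adam (claim 12). A minimal runnable sketch of that pattern, with a toy linear "renderer" and a linear "encoder" standing in for the physically based renderer and the neural networks of the cited art; all names and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "renderer": a fixed linear map from two hair parameters to a
# three-pixel image. Purely an illustrative stand-in.
A = np.array([[1.0, 0.5],
              [0.2, 2.0],
              [0.7, 0.3]])

def render(params):
    return A @ params

W = np.zeros((2, 3))                      # linear "encoder" weights

# Adam optimizer state.
m = np.zeros_like(W)
v = np.zeros_like(W)
beta1, beta2, lr, eps = 0.9, 0.999, 0.01, 1e-8

for t in range(1, 3001):
    # Sample hair parameters from a uniform distribution (claim 10).
    P = rng.uniform(0.0, 1.0, size=(2, 32))
    images = render(P)                    # rendered training images
    err = W @ images - P                  # residual of the encoder's estimates
    grad = err @ images.T / P.shape[1]    # gradient of the mean squared loss
    # Adam update on the encoder weights (claim 12).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    W -= lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)

# After training, the encoder inverts the toy renderer on new parameters.
recovered = W @ render(np.array([0.3, 0.7]))
```

In the cited art the encoder would be a convolutional network (e.g. the VGG-based recognition networks in Romaszko) and the renderer a Monte Carlo simulator; the linear stand-ins here only keep the training loop self-contained and verifiable.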
Regarding claim 14, Degenhard in view of Romaszko and Öztireli teaches the computer-implemented method of claim 8, further comprising: obtaining, by the computing system, a captured image of a hair swatch; using, by the computing system, the trained inverse graphics encoder to determine a set of estimated hair parameters based on the captured image; and using, by the computing system, the non-differentiable renderer to generate a rendered image based on the set of estimated hair parameters and a set of scene parameters (See the rejection of claim 1).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHONG X NGUYEN, whose telephone number is (571) 270-1591. The examiner can normally be reached Mon-Fri, 8am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHONG X NGUYEN/
Primary Patent Examiner, Art Unit 2617

Prosecution Timeline

Apr 17, 2024
Application Filed
Mar 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585691: PERSONALIZED PRESENTATION CONTENT CONSUMPTION IN A VIRTUAL REALITY (VR) ENVIRONMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12579730: Ray-Box Intersection Circuitry (2y 5m to grant; granted Mar 17, 2026)
Patent 12569299: METHOD FOR GENERATING SURGICAL SIMULATION INFORMATION AND PROGRAM (2y 5m to grant; granted Mar 10, 2026)
Patent 12573136: SCENE TRACKS FOR REPRESENTING MEDIA ASSETS (2y 5m to grant; granted Mar 10, 2026)
Patent 12560998: DISPLAY DIMMING CONTROL APPARATUS, DISPLAY DIMMING CONTROL METHOD, RECORDING MEDIUM, AND DISPLAY DIMMING SYSTEM (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 99% (+25.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
