Prosecution Insights
Last updated: April 19, 2026
Application No. 18/396,374

MAKEUP VIRTUAL TRY ON METHODS AND APPARATUS

Status: Non-Final OA (§103)
Filed: Dec 26, 2023
Examiner: NGUYEN, DAVID VAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: L'Oréal
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg) — grants only 0% of cases
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 14 across all art units; 14 currently pending (career history)

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 78.6% (+38.6% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
Compared against the Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 18/396,374, filed on 12/28/2022.

Election/Restrictions
Applicant's election without traverse of the restriction requirement dated 12/12/2025 in the reply filed on 1/22/2026 is acknowledged.

Specification
The disclosure is objected to because of the following informalities: In Par 39, Line 6, "…made available for use selection" should be written as "…made available for user selection". In Par 116, Line 3, "…replacing with an overly" should be written as "…replacing with an overlay". Appropriate correction is required.

Claim Objections
Claim 8 is objected to because of the following informalities: Claim 8 uses the term "such as" to describe examples or preferences of shape changes to particular facial features. However, these examples or preferences should be properly set forth in the specification rather than the claims. If stated in the claims, the examples and preferences may lead to confusion over the intended scope of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (US 20200401790 A1) and Li et al. (US 20200342209 A1), hereinafter Hu and Li respectively.

Regarding claim 1, Hu teaches a computer-implemented method comprising executing by one or more processors (processor, paragraph 0007) the steps of: rendering an output image derived from the input image using a rendering pipeline ("obtaining an adjustment amplitude by which the adjuster performs dynamic adjustment on the target face portion, and displaying a change effect of the target face portion based on the dynamic adjustment in a display interface" – Abstract. [NOTE: the output image is displayed after performing adjustment on the target (input) image; Fig. 8 also shows the rendering pipeline]), the output image derived by applying one or more shape changes to a particular facial feature ("For example, a deformation operation may be performed on a mouth width, an eye size, or a face size" – Par 40, Lines 7-8), the one or more shape changes determined by: mapping a grid of spaced grid points to pixels of the particular facial feature and any associated facial features ("performing interpolation processing on pixels in the deformed grid region, to obtain pixel values corresponding to the deformed grid region; and constructing a deformed face portion according to the deformed grid region, and performing pixel rendering on the deformed face portion according to the pixel values, to obtain the deformed face image." – Par 95); and warping at least some of the spaced grid points using respective shape changing functions, the warping changing location of at least some of the spaced grid points for changing locations of the face points of the particular facial feature ("adjustment positions of vertices or a plurality of pixel points of the target grid region included in each deformation unit are separately calculated according to the deformation type and the deformation intensity." – Par 91, Lines 42-25. [NOTE: Hu discloses the changing of positions of vertices on the grid based on the face deformation changes. Face deformation can be understood by one of ordinary skill in the art as face warping. Rendering of pixel points of the face image is performed by interpolation processing to get the pixel values corresponding to the deformed grid, Par 96]) and wherein the rendering determines output pixels for the particular facial feature and any associated facial feature for the output image in response to the warping ("Then, the deformed face portion is constructed according to the deformed grid region, and the pixel rendering is performed on the deformed face portion according to the pixel values, to obtain the deformed face image." – Par 96, Lines 9-12. [NOTE: pixels of the facial features are rendered to output an image that reflects the face warping]).

Hu does not teach processing an input image to localize facial features using a face tracking engine having one or more deep neural networks to respectively produce face points defining a contour for each of the facial features localized. However, Li teaches processing an input image to localize facial features using a face tracking engine having one or more deep neural networks to respectively produce face points defining a contour for each of the facial features localized ("configure the computing device to process an image to determine respective locations of each of a plurality of landmarks by: processing the image using a Convolutional Neural Network (CNN)" – Par 6, Lines 4-7. [NOTE: The abstract specifically states the methods for facial landmarks with CNNs]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Hu to incorporate Li's teaching of using a CNN to produce face points for each facial feature. It is common in the art to use deep neural network face tracking engines to produce facial landmarks because of their accuracy in detecting features such as eyes, noses, and mouths. Using the CNN as described by Li would have the predictable result of generating facial points that best depict where each complex face feature is on the input face image.
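To make the grid-deformation mechanism that the claim 1 rejection leans on easier to follow, here is a minimal Python sketch of the general pattern: spaced grid points covering a feature are displaced by a shape-changing function, and output pixels are recovered by interpolating back into the source image. This is an editorial illustration under assumptions, not code from the application, Hu, or Li; warp_feature, shape_fn, and widen are invented names.

import numpy as np
from scipy.interpolate import griddata

def warp_feature(image, grid_points, shape_fn):
    """Warp a feature region by displacing spaced grid points.

    image       : (H, W) or (H, W, C) array covering the feature region
    grid_points : (N, 2) array of (x, y) grid points spanning the region
    shape_fn    : maps the (N, 2) grid points to their displaced positions
    """
    h, w = image.shape[:2]
    warped = shape_fn(grid_points)                      # new grid-point locations

    # Dense (x, y) coordinates of every output pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    dense = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    # Backward mapping: interpolate where each output pixel should sample
    # from in the input image (warped grid -> original grid).
    src_x = griddata(warped, grid_points[:, 0], dense, method="linear")
    src_y = griddata(warped, grid_points[:, 1], dense, method="linear")
    # Outside the warped grid's convex hull, fall back to the identity mapping.
    src_x = np.where(np.isnan(src_x), dense[:, 0], src_x)
    src_y = np.where(np.isnan(src_y), dense[:, 1], src_y)

    sx = np.clip(np.round(src_x), 0, w - 1).astype(int)
    sy = np.clip(np.round(src_y), 0, h - 1).astype(int)
    return image[sy, sx].reshape(image.shape)           # nearest-pixel resampling

def widen(points, factor=1.15):
    """Example shape-changing function: widen the feature about its centroid."""
    c = points.mean(axis=0)
    out = points.astype(float).copy()
    out[:, 0] = c[0] + (out[:, 0] - c[0]) * factor
    return out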
Regarding claim 9, the claim describes a system that performs the function of the steps of claim 1. Therefore, system claim 9 corresponds to the method disclosed in claim 1 and is rejected for the same reasons of obviousness as used above.

Regarding claim 4, Hu in view of Li teaches the method of claim 1. Hu further teaches wherein the method comprises providing a user interface to receive input to define shape parameters for the one or more shape changes and wherein the rendering is responsive to the user input ("In the deformation operation interface, a user may select a to-be-deformed target face portion, an operation type, and an adjustment parameter. The deformation operation interface includes the portion selection interface and the type setting interface. The portion selection interface may be used for receiving the operation instruction, and the type setting interface may be used for selecting the operation type, and the like." – Par 64, Lines 7-15. [NOTE: Hu discloses that the user can make selections for any type of deformation operation, which the deformation unit will then process, apply to the initial input image, and display on the interface to the user.]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate the teachings of Hu to provide the user an interface to input deformation operations and display the changes. User interfaces are common tools used for their intuitive nature. Using a UI to receive the user's selection of face deformations and updating the image to reflect the modifications simplifies the process for the user.

Regarding claim 7, Hu in view of Li teaches the method of claim 1. Hu further teaches wherein the particular facial feature defines a first facial feature and the step of rendering is repeated in respect of a second facial feature to produce an output image having at least two shape-changed facial features ("As shown in FIG. 10a, when the user selects a face and a corresponding whole of the face, the operation instruction may be generated, or as shown in FIG. 10b, when the user selects eyes and a corresponding eye height of the eyes, the operation instruction may be generated." – Par 109, Lines 6-10, Fig 10a-10b. [NOTE: Fig 10a-10b show a display of a face with both a slimmer chin and bigger eyes]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate the teachings of Hu to repeat the rendering step so that a second facial feature modification can be applied alongside the first. By allowing multiple facial feature modifications to be displayed simultaneously, the user is given more control over the face warping of the initial input image. It would also allow the user to make adjustments and see how the facial feature modifications look together rather than separately.
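As a rough picture of the claim 4 and claim 7 limitations discussed above (shape parameters defined through a user interface, with rendering repeated for a second facial feature), the following sketch applies user-requested shape changes one feature at a time. It assumes a warp routine like the earlier grid-warp sketch; ShapeParams, render_with_shape_changes, and the example requests are hypothetical and not taken from the application or the cited references.

from dataclasses import dataclass

@dataclass
class ShapeParams:
    feature: str       # e.g. "eyes", "jaw"
    operation: str     # e.g. "enlarge", "slim"
    intensity: float   # 0.0 .. 1.0, typically set from a UI slider

def render_with_shape_changes(image, face_points, requests, warp_fn):
    """Apply each requested shape change in turn and return the result.

    face_points maps a feature name to its landmark/grid points, and
    warp_fn(image, points, params) performs one grid warp per feature,
    so the rendering step is simply repeated for each requested feature.
    """
    out = image
    for params in requests:
        out = warp_fn(out, face_points[params.feature], params)
    return out

# Example of UI-style input: slim the jaw, then enlarge the eyes.
example_requests = [ShapeParams("jaw", "slim", 0.4),
                    ShapeParams("eyes", "enlarge", 0.6)]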
Regarding claim 8, Hu in view of Li teaches the method of claim 1. Hu further teaches wherein the shape changes applied to the particular facial feature comprise any one of a brow shaping, a nose shaping, such as a nostril slimming or a bridge slimming, a face contour change, such as jaw slimming, an eye or eyelid change such as a vertical or a horizontal eye enlargement; or a lip change such as lip plumping ("For example, the operation types of the deformation corresponding to the face portion may include adjusting an angle, an eyebrow distance, and a position of the eyebrows, adjusting a size, a width, a thickness, and a position of the mouth, adjusting a size, a wing, a bridge, a tip, and a position of the nose, adjusting a size, an eye height, an eye distance, an inclination, an eye brightening degree, and eye bag removing of the eyes, and adjusting an overall contour, cheeks, a chin, and a forehead of the face" – Par 51, Lines 10-18). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate the teachings of Hu to include various types of facial feature shape changes such as eyebrow sizing, mouth width, nose positioning, eye height, etc. Adding various facial shape change possibilities allows more creative control, letting the user modify and warp the initial image to their preferences.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Li, and Sartori et al. (US 11315173 B2), hereinafter Sartori.

Regarding claim 2, Hu in view of Li teaches the method of claim 1. Hu in view of Li does not teach wherein the face tracking engine and rendering pipeline are components of a VTO application for simulating the effects of a makeup product applied to facial features. However, Sartori teaches wherein the face tracking engine and rendering pipeline are components of a VTO application for simulating the effects of a makeup product applied to facial features ("Described herein are techniques for generating, calibrating, and applying virtual makeup products that simulate, in a photo realistic way, the application of real-world makeup products." – Col 3, Lines 5-8. [NOTE: Sartori teaches a "virtual try-on" (VTO) application. To apply the makeup effects onto the input image, a rendering pipeline must exist to modify the image. After the combination, the face tracking engine as taught by Li and the VTO application as taught by Sartori can be added to Hu's facial warping system so that makeup effects can be simulated on facial features that have been warped.]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Hu by incorporating the teachings of Sartori to use a rendering pipeline of a VTO application to apply makeup effects to the face features after warping. Using the facial tracking engine to determine facial parts and a rendering pipeline for a VTO application will benefit the face warping system by allowing users to apply makeup to warped areas of the face. The VTO rendering pipeline could apply the makeup effect, whether eyeshadow, lipstick, blush, etc., so that a user can simulate makeup effects without the need to physically put the makeup on.
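The claim 2 combination reads the face tracking engine and the rendering pipeline as components of a single virtual try-on (VTO) application. Below is a hedged structural sketch of how such a composition could be organized; the VTOPipeline class and its tracker, warper, and makeup_renderer stages are invented for illustration and are not the implementations of Hu, Li, or Sartori.

class VTOPipeline:
    """Chains landmark detection, shape changing, and makeup rendering."""

    def __init__(self, tracker, warper, makeup_renderer):
        self.tracker = tracker                    # e.g. a CNN landmark detector (Li's role)
        self.warper = warper                      # grid-based shape changer (Hu's role)
        self.makeup_renderer = makeup_renderer    # makeup compositor (Sartori's role)

    def run(self, image, shape_requests, makeup_products):
        landmarks = self.tracker.detect(image)    # localize facial features
        shaped = self.warper.apply(image, landmarks, shape_requests)
        # Makeup is composited onto the already shape-changed features, so the
        # simulated product follows the new contours of the warped face.
        return self.makeup_renderer.composite(shaped, landmarks, makeup_products)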
Regarding claim 3, Hu in view of Li and Sartori teaches the method of claim 2. Sartori further teaches wherein the rendering pipeline renders a makeup effect to the particular facial feature as shape changed such that the output image comprises the particular facial feature as shape changed and with the makeup effect ("Process 400 is then performed to apply the virtual lipstick product to the base image resulting in a composite image including the base image and one or more makeup images that together simulate the application of the corresponding real-world makeup product. The user then selects a virtual eyeshadow product. Process 400 is then reapplied to generate one or more additional makeup images that simulate the application of the corresponding real-world eyeshadow product." – Col 13, Lines 48-56. [NOTE: Sartori discloses that a user can select multiple makeup effects, which are then combined to create a composite image. This blending of makeup effects shows the rendering of a new image to display all of the makeup effects chosen. After the combination, the CNN face tracking engine taught by Li and the rendering pipeline for a VTO application as taught by Sartori can be used with the face warping system as taught by Hu to apply makeup effects to the warped face image to teach this claim element.]). It would have been obvious to one of ordinary skill in the art to modify Hu by further incorporating the teachings of Sartori to have the rendering pipeline apply makeup effects to warped facial features. This adds more utility to the VTO application by giving the user more customizability, as they would not only be able to simulate makeup effects on their face, but also simulate makeup on face modifications based on their preferences.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hu, Li, and Kalarot et al. (US 11907839 B2), hereinafter Kalarot.

Regarding claim 5, Hu in view of Li teaches the method of claim 4. However, Hu does not teach wherein at least some of the shape changing functions perform one or more of: curve matching to match a middle curve along a middle of the contour of the particular facial feature with a target curve defined by the shape parameters of the one or more shape changes, attenuating location changes to spaced grid points responsive to distance to the target curve; or point matching to match a discrete point of the contour of the particular facial feature to a target point defined by the shape parameters of the one or more shape changes; or area expansion or area compression to expand or compress an area along a face point curve or about a particular pixel, responsive to the shape parameters, the area expansion or area compression attenuated by an attenuation function responsive to distance from the face point curve or the particular pixel. However, Kalarot teaches point matching to match a discrete point of the contour of the particular facial feature to a target point defined by the shape parameters of the one or more shape changes ("The warper 204 receives an input image 106 with detected landmarks and a generated image 139 with detected landmarks from the landmark detector 202. The warper 204 warps one of the images to align the landmarks in the input image 106 with the landmarks in the generated image 139." – Col 12, Lines 64-67. [NOTE: Kalarot teaches a point matching warp, but does not teach that it is done on a user-defined modified image based on the user's input for defining the shape parameter. After the combination, the modified image as defined by the user as disclosed by Hu and the point matching to match landmarks as disclosed by Kalarot would then teach point matching to match a discrete point of the contour of the particular facial feature to a target point defined by the shape parameters of the one or more shape changes.]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Hu by incorporating the teachings of Kalarot to use point matching to match a point of a facial feature with a target point defined by shape-changing parameters. With point matching, the discrete facial feature point can be aligned to the target point. This will allow user-specified shape parameters to produce consistent and accurate deformations.
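To make the claim 5 "point matching" language concrete, here is a minimal sketch: a single contour point is moved to a target point defined by the shape parameters, and surrounding grid points follow with a smooth distance falloff so the warp stays local. The Gaussian falloff and the default radius are conveniences of the example, not claim language and not Kalarot's method.

import numpy as np

def point_match_warp(grid_points, source_pt, target_pt, sigma=15.0):
    """Move source_pt toward target_pt and drag nearby grid points along.

    grid_points : (N, 2) array of spaced grid points
    source_pt   : (2,) contour point of the facial feature to move
    target_pt   : (2,) target point taken from the shape parameters
    sigma       : falloff radius in pixels (hypothetical default)
    """
    pts = np.asarray(grid_points, dtype=float)
    src = np.asarray(source_pt, dtype=float)
    delta = np.asarray(target_pt, dtype=float) - src
    dists = np.linalg.norm(pts - src, axis=1)
    weights = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))   # 1 at the point, ~0 far away
    return pts + weights[:, None] * delta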
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hu, Li, Kim et al. (US 10223767 B2) and Liu et al. (US 20210390789 A1), hereinafter Kim and Liu respectively.

Regarding claim 6, Hu in view of Li teaches the method of claim 1. However, Hu does not teach wherein determining the pixels comprises fitting triangles to vertices defined by the spaced grid points as warped and UV mapping the pixels of the particular facial feature and any associated feature onto the vertices. However, Kim teaches wherein determining the pixels comprises fitting triangles to vertices defined by the spaced grid points as warped ("The landmark points are representative of points, nodes, or markers that are positioned on a face and are combinable with other landmark points to form geometric shapes, such as triangles, that define the face mesh 116." – Col 5, Lines 54-58. [NOTE: Kim discloses the construction of triangles based on landmark points or vertices that define a face mesh. One of ordinary skill in the art could then use the same process for constructing triangles using face points with the spaced grid points taught by Hu.]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Hu by incorporating the teachings of Kim to determine pixels by fitting triangles onto vertices defined by the spaced grid points as warped. By fitting triangles to the vertices of the spaced grid, the warping can be done locally (in the area of interest such as the eye, nose, mouth, etc.) without affecting other areas. This gives more control to the user by allowing them to modify certain features of their face without changing the entirety of it.

Hu in view of Li and Kim still does not teach UV mapping the pixels of the particular facial feature and any associated feature onto the vertices. However, Liu further teaches UV mapping the pixels of the particular facial feature and any associated feature onto the vertices ("For example, the UV face position map can include a 2D texture map that maps points (e.g., pixels) in the 2D texture map to vertices and/or coordinates of a 3D representation (e.g., a 3D mesh, model, surface, geometry, etc.) of the face" – Par 62, Lines 11-15. [NOTE: Although Liu teaches the UV mapping of 2D texture to vertices in 3D meshes, the same process can be applied for 2D textures to 2D surfaces as disclosed in the present application.]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Hu by incorporating the teachings of Liu to UV map the pixels of the facial feature onto the vertices. UV mapping is a common technique in the art for applying textures to 2D/3D shapes. By UV mapping the pixels of the facial feature onto the vertices, the makeup effects chosen by the user can accurately be applied to the correct areas even during facial deformation.
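For the claim 6 limitations, the sketch below shows the general pattern of fitting triangles to the warped grid vertices and attaching a per-vertex UV coordinate taken from the pre-warp positions, so the original feature pixels can be mapped onto the deformed mesh. The Delaunay triangulation and the simple [0, 1] normalization are assumptions made for the example; none of this code comes from Hu, Kim, or Liu.

import numpy as np
from scipy.spatial import Delaunay

def build_uv_mesh(original_points, warped_points, image_shape):
    """Fit triangles to warped vertices and attach UV texture coordinates.

    original_points : (N, 2) pre-warp (x, y) grid points
    warped_points   : (N, 2) post-warp vertex positions
    image_shape     : shape of the source image, (H, W) or (H, W, C)
    Returns (triangles, uv): triangle vertex indices into warped_points and
    per-vertex texture coordinates in [0, 1] pointing back into the image.
    """
    h, w = image_shape[:2]
    tri = Delaunay(np.asarray(warped_points, dtype=float))   # fit triangles to vertices
    uv = np.asarray(original_points, dtype=float) / np.array([w - 1, h - 1], dtype=float)
    return tri.simplices, uv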
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID V. NGUYEN, whose telephone number is 571-272-6111. The examiner can normally be reached M-F 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID VAN NGUYEN/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Dec 26, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573160
INTIMACY-BASED MASKING OF THREE DIMENSIONAL (3D) FACE LANDMARKS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
