Prosecution Insights
Last updated: April 19, 2026
Application No. 18/715,151

MODELING OF THE LIPS BASED ON 2D AND 3D IMAGES

Final Rejection — §102, §103
Filed: May 31, 2024
Examiner: MAZUMDER, TAPAS
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: L'Oréal
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% — above average (342 granted / 418 resolved; +19.8% vs TC avg)
Interview Lift: +16.2% in resolved cases with interview
Avg Prosecution: 2y 4m typical timeline; 16 applications currently pending
Total Applications: 434 across all art units
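The headline figures above follow from simple arithmetic on the career counts. A minimal sketch of that derivation (whole-percent rounding and adding the interview lift in percentage points are assumptions on my part, not stated by the page):

```python
# Career allow rate from resolved cases (342 granted of 418 resolved)
granted, resolved = 342, 418
allow_rate_pct = round(granted / resolved * 100)

# With-interview probability: base rate plus the +16.2 point interview lift
# (additive combination is an assumption, not stated by the dashboard)
interview_lift_pts = 16.2
with_interview_pct = round(allow_rate_pct + interview_lift_pts)

print(allow_rate_pct)      # 82
print(with_interview_pct)  # 98
```

Under these assumptions the sketch reproduces both displayed values: 342/418 ≈ 81.8% rounds to 82%, and 82 + 16.2 = 98.2 rounds to 98%.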

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Deltas are versus the Tech Center average estimate. Based on career data from 418 resolved cases.
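As a consistency check on the table above, the Tech Center baseline can be backed out from each row (assuming delta = examiner rate minus TC average, which is how such comparisons are usually presented); every statute then implies the same baseline:

```python
# (examiner rate %, delta vs TC average %) per statute, read off the table above
stats = {
    "101": (8.8, -31.2),
    "103": (50.3, 10.3),
    "102": (12.4, -27.6),
    "112": (16.0, -24.0),
}

# If delta = rate - tc_avg, then tc_avg = rate - delta
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(tc_avg)  # every statute backs out to the same 40.0 estimate
```

All four rows back out to a 40.0% Tech Center average estimate, suggesting the chart plotted a single baseline across statutes.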

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 10-13 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Samain et al. (US Patent Publication 20200260838, “Samain”).

Regarding claim 10: A method for the computerized modeling of at least one area of the lips, the method comprising: (i) capturing an input 2D image, provided with a first dimensional reference frame, of at least part of the surface of the lips, (“[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned.
For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.”) (ii) capturing an input 3D image, provided with a second dimensional reference frame, of the at least part of the surface of the lips, (“[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned. For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.”) and (iii) generating an output 3D image of the area of the lips from the input 2D image and from the input 3D image, the contour of the lips. (“[0047] Either one of the methods may comprise a step involving allowing a user to model a surface obtained from the 3D scan, notably the outline thereof, and thus generate the reworked surface.” “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points. If appropriate, an operator is allowed to modify the location of these control points, for example by working on an on-screen depiction of the lips.”)

Regarding claim 11: Samain teaches determining the contour of the lips in the output 3D image based on the input 2D image. (Samain teaches determining control points from the 2D image. See Samain, “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof.
A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points.” The contour is estimated in the output 3D image based on interpolation of the control points. “[0051] Either one of the methods may involve determining a plurality of points on the natural outline of the lips, notably from at least one image thereof, and estimating the natural outline of the lips by interpolation between these points.”)

Regarding claim 12: Samain teaches determining depth of the lips in the output 3D image based on the input 3D image. (Samain, “[0211] … A 3D surface and the volume of the applicator are generated from the result of the scan …”)

Regarding claim 13: A system for the computerized 3D modeling of at least one area of the lips, in manufacture of a personalized applicator for applying a cosmetic product to the lips, the system comprising a mobile 2D and 3D image-capturing device, in which system, once the mobile image-capturing device has been placed in a predetermined position with respect to the lips, the mobile image-capturing device is able to capture an input 2D image, provided with a first dimensional reference frame, of at least part of the surface of the lips, and to capture an input 3D image, provided with a second dimensional reference frame, of the at least part of the surface of the lips, (“[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned. For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.” “[0173] During a step 11, a 3D scan of the topography of at least part of the surface of the lips of the user is taken using a 3D scanner 31, for example an Artec 3D “Spider” color scanner.
Prior to this step 11, a composition may have been applied to at least part of the user's lips, as detailed later on. The 3D scan may include the lips and at least part of the skin around the lips.”) a processor (“[0174] During a step 12, a 3D surface is generated from the scan obtained in step 11, for example using 3D software of the Geomagic's Wrap type, and recorded in a file that can be read by a CNC machine, notably a micro-machining machine 35 or by a 3D printer 32.” Fig. 28 displays a computer having a processor to generate and display the generated surface) programmed to generate an output 3D image of the area of the lips from the input 2D image and from the input 3D image, by determining a contour of the lips in the output 3D image based on the input 2D image. (“[0047] Either one of the methods may comprise a step involving allowing a user to model a surface obtained from the 3D scan, notably the outline thereof, and thus generate the reworked surface.” “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points. If appropriate, an operator is allowed to modify the location of these control points, for example by working on an on-screen depiction of the lips.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Samain in view of Takeguchi et al. (US Patent Publication 20070211944, “Takeguchi”).

Regarding claim 1: Samain teaches a method for manufacturing a personalized applicator for applying a cosmetic composition to lips, this applicator comprising an application surface made of a material configured to become laden with composition, the method comprising: (i) capturing an input 2D image, provided with a first dimensional reference frame, of at least part of the surface of the lips; (“[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned. For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.”) (ii) capturing an input 3D image, provided with a second dimensional reference frame, of the at least part of the surface of the lips; (“[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned.
For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.”) and (iii) producing at least from the input 2D image, provided with the first dimensional reference frame, and from the input 3D image, provided with the second dimensional reference frame, at least part of the applicator or of a mold used to manufacture it, by machining a preform or through additive manufacture, (“[0047] Either one of the methods may comprise a step involving allowing a user to model a surface obtained from the 3D scan, notably the outline thereof, and thus generate the reworked surface.” “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points. If appropriate, an operator is allowed to modify the location of these control points, for example by working on an on-screen depiction of the lips.” “[0026] b) performing an optical acquisition of the topography of the surface thus covered and of at least one image providing information as to the location of the composition, and [0027] c) from this acquisition creating the applicator or a mold intended for the manufacture thereof”), but does not expressly teach determining at least one landmark visible both in the input 2D image and in the input 3D image and assigning the at least one landmark a dimensional coordinate in an output 3D image.

Takeguchi teaches determining at least one landmark visible both in the input 2D image and in the input 3D image and assigning this landmark a dimensional coordinate in an output 3D image.
(“[0025] Therefore, in this embodiment, a relation between the face image and the three-dimensional shape is calculated using the coordinates of the reference feature points on the acquired face image and the positions of the reference feature points on the face shape stored in the three-dimensional shape information holding unit 200. [0026] Firstly, as shown in the upper left in FIG. 4, three-dimensional shape information on a face and positions of the reference feature points on the three-dimensional shape are prepared in the three-dimensional shape information holding unit 200. The three-dimensional shape information of the face may be obtained by measuring the three-dimensional shape of a person in the input image, or may be a representative three-dimensional shape of the face obtained, for example, by averaging several three-dimensional shapes or by preparing with modeling software. … [0028] For example, when six feature points are obtained from the input image of the face as shown in FIG. 4, when the coordinates of the six points are represented by vectors a1, a2, . . . a6, the measurement matrix W is W=[a1, a2, . . . a6], that is, a matrix of 2×6. When the coordinates of the positions of the feature points on the corresponding three-dimensional model are represented by vectors b1, b2, . . . b6, the shape matrix S is S=[b1, b2, . . . b6], that is, a matrix of 3×6. … [0029] When the obtained movement matrix M is used, the position "a" of the point on the two-dimensional image (two-dimensional vector) corresponding to an arbitrary point "b" on the three-dimensional shape (three-dimensional vector) can be calculated from an expression (2): a=Mb (2)”)

Samain and Takeguchi are analogous as they are from the field of image processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Samain to include determining at least one landmark visible both in the input 2D image and in the input 3D image and assigning this landmark a dimensional coordinate in an output 3D image, as taught by Takeguchi. The motivation for the modification is to ensure that the major feature points of a face image exist in both the 2D and 3D scans, so that output images are generated with the facial features in the correct positions.

Regarding claim 2: Samain as modified by Takeguchi teaches determining (113) a plurality of points of the contour of the lips, based on the input 2D image, and estimating the contour of the lips in the output 3D image, through interpolation based on these points. (Samain teaches determining control points from the 2D image. See Samain, “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points.” The contour is estimated in the output 3D image based on interpolation of the control points. “[0051] Either one of the methods may involve determining a plurality of points on the natural outline of the lips, notably from at least one image thereof, and estimating the natural outline of the lips by interpolation between these points.”)

Regarding claim 3: Samain as modified by Takeguchi teaches determining depth of the lips in an output 3D image based on the input 3D image. (Samain, “[0211] …
A 3D surface and the volume of the applicator are generated from the result of the scan …”)

Regarding claim 4: Samain as modified by Takeguchi teaches detecting, in the input 2D image, multiple first landmarks defining a contour of the lips and multiple second landmarks located on either side of a separating line separating the lips, in order to produce the contour of the lips in an output 3D image. (Samain, “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points.” “[0186] An automatic outlining of the lips from an image thereof can be produced, it being possible to generate a “spline”, having numerous control points, for example more than twenty or so, as illustrated in FIG. 7.” Fig. 7 implies these control points are from either side of the separating line.)

Regarding claim 5: Samain as modified by Takeguchi teaches, characterized in that it comprises detecting, in the input 3D image, multiple third landmarks defining commissures of the lips and multiple fourth landmarks located on the longitudinal axis of the lips, in order to produce depth of the lips in an output 3D image. (Samain, “[0050] Either one of the methods may involve outlining, preferably automatically, the lips from at least one image thereof. A curve derived from the outlining, and known as a «spline», may be created, notably having at least 10 control points, and better, at least 20 control points.” “[0186] An automatic outlining of the lips from an image thereof can be produced, it being possible to generate a “spline”, having numerous control points, for example more than twenty or so, as illustrated in FIG.
7.”)

Regarding claim 6: Samain as modified by Takeguchi teaches displaying a printable and/or manufacturable output 3D image. (Samain, “[0053] Either one of the methods may involve displaying the natural surface of the scanned lips and/or a make-up result obtained with the applicator and/or the reworked surface.”)

Regarding claim 7: Samain as modified by Takeguchi teaches positioning a mobile image-capturing device with respect to the area of the lips using a position sensor. (Samain, “[0173] During a step 11, a 3D scan of the topography of at least part of the surface of the lips of the user is taken using a 3D scanner 31, for example an Artec 3D “Spider” color scanner.”)

Regarding claim 8: Samain as modified by Takeguchi teaches generating a reworked output 3D surface by stretching the input 2D image, the applicator or the mold used to manufacture it having a shape given at least partially by this reworked surface. (Samain, “[0187] It is possible to use image processing to isolate the regions from which to produce the applicator. Thus, FIG. 8 illustrates the captured region after the region outside the outlined outline has been eliminated, this corresponding to an image Im.sub.2 of the application surface 2 of the applicator 1 that is in the process of being produced.”)

Claim 9 is directed to a method and its steps are similar in scope and function to the steps of claim 1; therefore, claim 9 is rejected with the same rationales as specified in the rejection of claim 1. Claim 14 is directed to a device and its elements are similar in scope and function to the steps of method claim 1; therefore, claim 14 is rejected with the same rationales as specified in the rejection of claim 1.

Response to Arguments

Applicant’s arguments, see remarks, page 6, filed 3/4/2026, with respect to objections have been fully considered and are persuasive. The objection has been withdrawn.
Applicant’s arguments, see remarks, page 6, filed 3/4/2026, with respect to rejections of claims under 35 USC 112(b) have been fully considered and are persuasive. The rejections have been withdrawn.

Applicant’s arguments, see remarks, page 6, filed 3/4/2026, with respect to rejections of claims 10 and 13 under 35 USC 102 have been fully considered and are not persuasive. The rejections have been maintained. Applicant argues, see remarks page 7, “In the rejection of Claim 10, the Office Action cites paragraph [0044] of Samain as allegedly disclosing capturing an input 2D image provided with a first dimensional reference frame. The Office Action then cites paragraphs [0047] and [0050] of Samain as allegedly disclosing generating an output 3D image of the area of the lips from the input 2D image and from the input 3D image, a contour of the lips. However, these sections do not mention any 2D images. Indeed, Samain is silent regarding any 2D images. Instead, Samain only mentions capture of 3D images.”

Examiner replies: Samain [0044] teaches capturing both 3D and 2D images. See Samain, “[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned. For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.” The first line of [0044] discloses performing a 3D scan that captures volume; therefore, the 3D scan captures a 3D image (volume). The same 3D scanner also captures several images capturing color and appearance. These images are 2D images. Therefore, applicant’s argument is not persuasive.

Applicant’s arguments, see remarks, page 6, filed 3/4/2026, with respect to rejections of claim 1 under 35 USC 103 have been fully considered and are not persuasive. The rejections have been maintained.
In response to applicant’s argument that the primary reference does not have a 2D image, the argument is not correct. Samain [0044] teaches capturing both 3D and 2D images. See Samain, “[0044] In order to perform the 3D scan it is possible to use any 3D scanner capable of capturing the volume and the dimensions of the zone concerned. For preference, use is made of a 3D scanner able also to capture the color and appearance of the zone concerned, so as to acquire one or more images providing information as to the location of the composition.” The first line of [0044] discloses performing a 3D scan that captures volume; therefore, the 3D scan captures a 3D image (volume). The same 3D scanner also captures several images capturing color and appearance. These images are 2D images. Therefore, applicant’s argument is not persuasive.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tapas Mazumder, whose telephone number is (571) 270-7466. The examiner can normally be reached M-F 8:00 AM-5:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAPAS MAZUMDER/
Primary Examiner, Art Unit 2615

Prosecution Timeline

May 31, 2024
Application Filed
Nov 29, 2025
Non-Final Rejection — §102, §103
Mar 04, 2026
Response Filed
Mar 19, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579763
SIGNALING POSE INFORMATION TO A SPLIT RENDERING SERVER FOR AUGMENTED REALITY COMMUNICATION SESSIONS
Granted Mar 17, 2026 — 2y 5m to grant

Patent 12571648
GUIDANCE FOR COLLABORATIVE MAP BUILDING AND UPDATING
Granted Mar 10, 2026 — 2y 5m to grant

Patent 12573157
SEE-THROUGH DISPLAY METHOD AND SEE-THROUGH DISPLAY SYSTEM
Granted Mar 10, 2026 — 2y 5m to grant

Patent 12561916
INFORMATION PROCESSING APPARATUS
Granted Feb 24, 2026 — 2y 5m to grant

Patent 12555328
VIDEO PLAYING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Feb 17, 2026 — 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 98% (+16.2%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
