Prosecution Insights
Last updated: April 19, 2026
Application No. 18/770,507

MICROROBOT PLATFORM AND USER INTERFACE FOR EYELASH ENHANCEMENT

Status: Final Rejection (§103)
Filed: Jul 11, 2024
Examiner: LY, MOYA PHUNG
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: L'Oréal
OA Round: 2 (Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (3 granted / 5 resolved; +8.0% vs TC avg)
Interview Lift: +66.7% (strong; resolved cases with an interview vs. without)
Avg Prosecution: 2y 8m (typical timeline)
Total Applications: 23 across all art units (18 currently pending)
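These card values reduce to simple ratios over the examiner's five resolved cases. Below is a minimal sketch of the likely arithmetic; the per-case grant/interview split is an assumption chosen to reproduce the displayed numbers, not data from this page:

```python
# Hypothetical resolved-case records: 3 of 5 granted, and (assumed) both
# interviewed cases granted, which reproduces the card's +66.7% lift.
resolved = [
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": False},
    {"granted": False, "interview": False},
    {"granted": False, "interview": False},
]

allow_rate = sum(c["granted"] for c in resolved) / len(resolved)  # 3/5 = 60%

with_iv = [c for c in resolved if c["interview"]]
without_iv = [c for c in resolved if not c["interview"]]
rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)           # 2/2
rate_without = sum(c["granted"] for c in without_iv) / len(without_iv)  # 1/3

# Relative lift over the career baseline, as shown on the card.
lift = rate_with / allow_rate - 1
print(f"allow rate {allow_rate:.0%}; {rate_with:.0%} with interview vs "
      f"{rate_without:.0%} without; lift {lift:+.1%}")
```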

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 27.3% (-12.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 5 resolved cases.
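Notably, every displayed delta is consistent with a single 40% Tech Center baseline, which suggests how the card is computed. A sketch of that arithmetic; the baseline is inferred from the deltas, not documented:

```python
# Observed per-statute rates from the card, and the inferred (not documented)
# 40% Tech Center baseline that reproduces all four deltas exactly.
examiner_rate = {"101": 0.125, "103": 0.492, "102": 0.109, "112": 0.273}
tc_avg = {"101": 0.40, "103": 0.40, "102": 0.40, "112": 0.40}

for statute, rate in examiner_rate.items():
    delta = rate - tc_avg[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```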

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed on 02/18/2026 has been entered. Claims 1-3, 6, 8-15, and 19-20 are pending in the application. In response to Applicant's amendments, Examiner withdraws the previous objections to the Drawings and the claims; and withdraws the previous 112(b) rejections.

Response to Arguments

Applicant’s arguments filed 02/18/2026 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/18/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 6 and 13 are objected to because of the following informalities: In claim 6, there should be a conjunction between “a skin condition attribute” and “a hair attribute” (for example, “a skin condition attribute, or a hair attribute”). In claim 13, “a subject” should read “the subject” because a subject is previously recited in claim 12. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 6, 8-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Harding et al. (US 20190269223 A1; hereafter “Harding”) in view of Pelrine (US 5099216 A), Calina (US 9456646 B2), Shalah-Abboud and Williams (US 20250214250 A1, filed 03/30/2023; hereafter “Shalah-Abboud”), and Fu et al. (US 20190014884 A1; hereafter “Fu”).

Regarding claim 12, Harding discloses A system comprising: circuitry configured to: obtain digital source image data of a subject (See “the eye area is located in eye locating step 741. In some embodiments, this can be achieved automatically by the computer vision system recognizing the shape of a closed human eye” of the subject [0122]. The computer vision system processes images (digital source image data) taken by camera 240 or stereo cameras 277 [0069]. Circuitry: see controller 276 (“one or more of: a microcontroller, microcomputer, microprocessor, field programmable gate array (FPGA), graphics processing unit (GPU), or application specific integrated circuit (ASIC)”) in [0069]. See also [0066], [0071], and Fig. 27.);

define an eyelash region of the subject in the digital source image data (The eye area is the eyelash region of the subject. See “the eye area is located in eye locating step 741. In some embodiments, this can be achieved automatically by the computer vision system recognizing the shape of a closed human eye” of the subject [0122]. The computer vision system processes images (digital source image data) taken by camera 240 or stereo cameras 277 [0069]. Circuitry: see controller 276 (“one or more of: a microcontroller, microcomputer, microprocessor, field programmable gate array (FPGA), graphics processing unit (GPU), or application specific integrated circuit (ASIC)”) in [0069]. See also [0071] and Fig. 27.);

generate an eyelash map based at least in part on analysis of the defined eyelash region (See “in imaging step 743, the computer vision system images the fan of the eyelashes [or a subset thereof]… Using the data from the computer vision system, the computer will calculate [map] the position of the fan of the eyelashes with respect to the robotic system” [0123]. See also eyelash geometry in [0114] and [0123-0124]. See also [0071], [0125], [0132], [0136-0137], and Fig. 28.);

and generate …robot control instructions based at least in part on the eyelash map (See “the computer will calculate the position of the fan of the eyelashes with respect to the robotic system. Then the computer will choose a region within the [mapped] fan of the eyelashes for isolation and extension placement in choose region step 745. Then, the computer will instruct [generate and transmit control instructions] the robotic system to attempt an isolation of an eyelash in attempt isolation step 746” [0123]. See “Controller 276 includes the software used to coordinate the motion of robotic mechanism 219 with data received from the computer vision system and then to carry out the motions described during eyelash isolation and extension placement” [0069]. See also [0071] and [0124-0125].),

wherein the… control instructions are configured to cause one or more …robots to apply one or more artificial lashes to the subject based on the eyelash map (See “If the eyelash is determined to have been successfully isolated, the computer will instruct the robot to perform the placement routine in placement step 751” [0124]. Because a region of the mapped fan of eyelashes was chosen for isolation, the artificial lash placement dependent on isolating a natural lash is based on the eyelash map. Details of how the robot applies extensions (artificial lashes) to the subject’s natural eyelashes are given in [0071]. See also “After the isolation robot is successful, the computer can return along path ‘A’ to step 746 or path ‘B’ to imaging step 743… it can be desirable to attempt isolation a number of times in one place along the fan of the eyelashes, choosing path ‘A’ repeatedly, but then choose path ‘B’ if the isolation is, or becomes, unsuccessful” [0124]. See also [0125-0126].);

However, Harding does not explicitly teach “generate microrobot control instructions based at least in part on the eyelash map, wherein the microrobot control instructions are configured to cause one or more microrobots to apply one or more artificial lashes to the subject based on the eyelash map; perform attribute analysis on the digital source image data to identify one or more attributes of the subject, wherein the eyelash recommendation is further based on the one or more attributes; provide the eyelash map to an eyelash recommendation engine, wherein the eyelash recommendation engine comprises an artificial neural network; and by the eyelash recommendation engine, generate an eyelash recommendation based at least in part on the eyelash map and the one or more attributes of the subject, wherein the eyelash recommendation comprises one or more positions on an eyelid or existing eyelash of the subject for the one or more artificial lashes to be applied by the one or more microrobots to address a condition or achieve a desired look, wherein the microrobot control instructions are further based on the eyelash recommendation.”

Pelrine, in the same field of endeavor (microrobot systems), teaches circuitry configured to… generate microrobot control instructions… (See “A computer program can determine and record the exact combinations necessary to make a particular movement in a particular direction… As a result, control, by means of a computer, of current to electromagnets 12 enables the adjustment of the magnetic force fields necessary to move manipulator [microrobot] 14, or any like magnet, in a full six degrees of motion” [col. 8, lines 33-45].).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the eyelash application system of Harding to generate microrobot control instructions as taught by Pelrine. One of ordinary skill in the art would have been motivated to make this modification for the benefit of using a microrobot with “an almost infinite variety of motions… available” (Pelrine, col. 8, lines 33-45).
However, Harding/Pelrine does not explicitly teach “perform attribute analysis on the digital source image data to identify one or more attributes of the subject, wherein the eyelash recommendation is further based on the one or more attributes; provide the eyelash map to an eyelash recommendation engine, wherein the eyelash recommendation engine comprises an artificial neural network; and by the eyelash recommendation engine, generate an eyelash recommendation based at least in part on the eyelash map and the one or more attributes of the subject, wherein the eyelash recommendation comprises one or more positions on an eyelid or existing eyelash of the subject for the one or more artificial lashes to be applied by the one or more microrobots to address a condition or achieve a desired look, wherein the microrobot control instructions are further based on the eyelash recommendation.”

Calina, in the same field of endeavor (application of artificial eyelashes), teaches wherein the eyelash recommendation comprises a position on an eyelid or existing eyelash of the subject for an artificial lash to be applied… (See “The first type of eyelash extension may be coupled to the natural eyelash at a first height [position] and in a first direction, wherein the first direction is positioned from the natural lash towards the axis of symmetry of the user's face” [col. 5, lines 21-27]. Calina recommends various ways of correcting the direction of natural lashes using eyelash extensions; for example, “correcting a natural lash growing about 2-6 mm away from and bent from the rest of natural lashes going away from the nose. The synthetic eyelash extension is placed on a left side of natural lash slightly toward nose about 1 mm from lash line” [col. 7, lines 45-53]. See also col. 7, line 54 to col. 8, line 50.).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the eyelash application system of Harding/Pelrine to apply artificial eyelashes to a specific position on an existing eyelash as taught by Calina. One of ordinary skill in the art would have been motivated to make this modification for the benefit of correcting the appearance of natural lashes (Calina; col. 7, line 45 to col. 8, line 50).
However, Harding/Pelrine/Calina does not explicitly teach “perform attribute analysis on the digital source image data to identify one or more attributes of the subject, wherein the eyelash recommendation is further based on the one or more attributes; provide the eyelash map to an eyelash recommendation engine, wherein the eyelash recommendation engine comprises an artificial neural network; and by the eyelash recommendation engine, generate an eyelash recommendation based at least in part on the eyelash map and the one or more attributes of the subject, …the one or more artificial lashes to be applied by the one or more microrobots to address a condition or achieve a desired look, wherein the microrobot control instructions are further based on the eyelash recommendation.”

Shalah-Abboud, in the same field of endeavor (robotic cosmetics application systems), teaches circuitry configured to… perform attribute analysis on the digital source image data to identify one or more attributes of the subject (Circuitry: processor 602, which runs Makeup Application Planner 620 to obtain the scan of the subject’s face and runs Desired look Generator 625 to obtain the desired look; both Makeup Application Planner 620 and Desired look Generator 625 are stored in memory 607 [0168-0170]. Digital source image data: visual input; see “the visual input may be obtained from visual sensors, such as a camera” [0123]. Perform attribute analysis: “the visual input may be analyzed to determine properties of the 3D surface of the [subject’s] face… The properties [attributes] may comprise a structure of the surface color of the surface (e.g., user's skin color, background color, or the like), texture (e.g. dry skin, pimples, pores, wrinkles, or the like), or the like” [0124]. See also [0046].), wherein the eyelash recommendation is further based on the one or more attributes (Eyelash recommendation: desired look. See “On Step 330, a desired look may be determined based on the user input and on the face of the subject… the desired look may be dynamically generated… based on properties [attributes] of the face” [0126].).

Shalah-Abboud additionally teaches generating a map (See “On Step 310, a 3D surface of a face of the subject may be obtained… based on a visual input capturing the subject” [0123]. See “the visual input may be analyzed to determine locations or positioning of components of the surface, such as coordinates of points of interest, boundaries, exact locations of facial features, reference points, or the like. As an example, coordinates of the eyes, nose, eyebrows, lips, or the like, may be identified” [0124].) and circuitry configured to… provide the… map to a… recommendation engine… (Circuitry: processor 602, which runs Desired look Generator 625 (630 in Fig. 6) stored in memory 607 [0168-0170]. See “Makeup Application Planner 620 may be further configured to obtain the desired look [recommendation] from Desired look Generator 625 [recommendation engine]… Desired look Generator 625 may be configured to determine the desired look… based on adaptation of the user input to the surface of the face of the subject, or the like” [0170]. See “On Step 330, a desired look may be determined based on the user input and on the face of the subject… the desired look may be dynamically generated based on the face of the subject (e.g., a photo thereof, 3D model thereof, or the like)” [0126]. See also Fig. 3A.); and by the… recommendation engine, generate a… recommendation based at least in part on the… map and the one or more attributes of the subject (See “On Step 330, a desired look [recommendation] may be determined based on the user input and on the face of the subject… the desired look may be dynamically generated based on the face of the subject (e.g., a photo thereof, 3D model thereof, or the like), based on properties of the face, or the like” [0126]. See “Makeup Application Planner 620 may be further configured to obtain the desired look from Desired look Generator 625… Desired look Generator 625 may be configured to determine the desired look… based on adaptation of the user input to the surface of the face of the subject, or the like” [0170]. See also Fig. 3A.), wherein the… recommendation comprises one or more positions on an eyelid… of the subject for [makeup] to be applied by the one or more …robots to address a condition or achieve a desired look (The user input used to generate the desired look (recommendation) includes a position for makeup to be applied as displayed in a photo or GUI [0125]. Additionally, “the makeup application plan may comprise an optimized set of trajectories [comprising positions] in 3D space of which the makeup applicator is configured to follow in order to achieve the desired look. Each trajectory may represent a path of the makeup applicator [robot] while applying the makeup material on the subject” [0127]. The makeup applied includes eyeshadow [0129], which is applied to eyelids.); wherein the …robot control instructions are further based on the eyelash recommendation (See Fig. 3A: In step 340, “generate the makeup application plan based on the desired look [recommendation] and the 3D surface” includes “generate instructions to cause the makeup applicator [robot] to apply the [makeup] on the subject” (substep 346). See also [0127-0133].).

Harding discloses allowing a user to designate regions to use different sized eyelash extensions [0067] to be applied by robotic mechanism 219 [0071]. In combination, the Desired look Generator of Shalah-Abboud can therefore generate an eyelash recommendation “based on previous selections” from the user (Shalah-Abboud, [0126]), “from a catalogue” (Shalah-Abboud, [0125]), or from Calina’s recommendations (Calina, col. 7, line 54 to col. 8, line 50), based on Harding’s eyelash fan map with a calculated position and orientation of the eyelash fan (see Harding, [0114-0115], [0123-0124], and [0136-0137]).
Thus, the combination of Harding, Pelrine, Calina, and Shalah-Abboud as a whole teaches “perform attribute analysis on the digital source image data to identify one or more attributes of the subject, wherein the eyelash recommendation is further based on the one or more attributes; provide the eyelash map to an eyelash recommendation engine… and by the eyelash recommendation engine, generate an eyelash recommendation based at least in part on the eyelash map and the one or more attributes of the subject, wherein the eyelash recommendation comprises one or more positions on an eyelid or existing eyelash of the subject for the one or more artificial lashes to be applied by the one or more microrobots to address a condition or achieve a desired look, wherein the microrobot control instructions are further based on the eyelash recommendation.”

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the eyelash application system of Harding/Pelrine/Calina to generate an eyelash recommendation as taught by Shalah-Abboud. One of ordinary skill in the art would have been motivated to make this modification for the benefit of “adaptation of [a desired] makeup design [to] the face of the subject” (Shalah-Abboud, [0126]).

Shalah-Abboud further discloses the image analysis used to determine attributes and generate a map of the subject’s face “may be performed using… machine learning” [0124]. However, Harding/Pelrine/Calina/Shalah-Abboud does not explicitly teach “wherein the eyelash recommendation engine comprises an artificial neural network”. Fu, in the same field of endeavor (makeup recommendations from image data), teaches wherein the eyelash recommendation engine comprises an artificial neural network (See “a makeup recommendation system, comprising: at least one trained neural network model for providing varying makeup styles” [0030]. See also the Abstract, [0031], and [0095].). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the eyelash application system of Harding/Pelrine/Calina/Shalah-Abboud to include an artificial neural network in the eyelash recommendation engine as taught by Fu. One of ordinary skill in the art would have been motivated to make this modification for the benefit of “generating personalized step-by-step makeup instructions… based on data in the at least one trained neural network annotated by the annotation system and/or recommending products from the makeup product database” comprised in the makeup recommendation system (Fu, [0030]).

Regarding claims 1 and 20, these claim limitations are significantly similar to those of claim 12; and, thus, are rejected on the same grounds.

Regarding claim 13, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 12 as addressed above, and Harding additionally discloses one or more cameras configured to capture the digital source image data of a subject (Harding discloses an embodiment with a single camera 240 (see Fig. 6) and an embodiment with two stereo cameras 277 (see Fig. 7). See also [0066], [0069], and [0141].).

Regarding claim 14, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 12 as addressed above, and Pelrine additionally discloses the one or more microrobots (See microrobot 14 shown in Fig. 1. See also col. 7, lines 6-14. Systems with multiple microrobots are also disclosed [col. 4, line 60 to col. 5, line 2; col. 10, lines 12-32].).

Regarding claim 15, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 12 as addressed above, and Harding additionally discloses “recognizing the shape of a closed human eye” (a facial landmark) by the computer vision system “from the digital source image data” (see [0069] and [0122].). Fu additionally teaches wherein the circuitry is further configured to extract facial landmarks from the digital source image data (Circuitry: this and the following steps may be implemented in software running on a processor P; see [0115], [0267-0273], and Fig. 38. See “Upon detection of the face of the image, in Step 1020, the facial landmarks are located using the input image” (digital source image data) [0114]. See also [0041], [0175], [0183-0184], Figs. 3, 30-31, and 35.), identify the location and shape of the eyelash region based on the facial landmarks (See “In Step 1030, e.g., a landmark detection algorithm may be utilized to locate the fiducial points of the landmarks, through which one can then extract the mouth region and eye region [eyelash region] images” [0115]. The shape is known to extract the eye region. The eye region is used for applying eyelash effects; see [0183]. See also [0041], [0175], [0184], Figs. 3, 30-31, and 35.), and apply an image mask corresponding to the eyelash region to the digital source image data (Image mask: extracted eye region. See “In Step 1030, e.g., a landmark detection algorithm may be utilized to locate the fiducial points of the landmarks, through which one can then extract the mouth region and eye region [eyelash region] images” [0115]. See the image mask in Figs. 4A-5C under various processing. See also [0041], [0116], [0175], [0183-0184], Figs. 3, 30-31, and 35.).

Regarding claim 2, these claim limitations are significantly similar to those of claim 15; and, thus, are rejected on the same grounds.

Regarding claim 3, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 2 as addressed above, and Fu additionally discloses wherein the facial landmarks include eye points or eye contours or eyebrow points or eyebrow contours (See “Landmarks can be preset and selected such as top of the chin, outside edge of each eye [eye contours], inner edge of each eyebrow [eyebrow contours], and the like. Such landmarks are common to all faces and so are detected and evaluated using precise localization of their fiducial points (e.g. nose tip, mouth and eye corners [eye points]) in color images of face foregrounds” [0114]. See also [0124], [0183-0184], and Fig. 35.).

Regarding claim 6, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 1 as addressed above, and Shalah-Abboud additionally discloses wherein the one or more attributes include one or more of a face shape attribute, an age attribute, an eye attribute, an eyebrow attribute, a skin tone attribute, a skin texture attribute, a skin condition attribute, a hair attribute (See “The properties [attributes] may comprise a structure of the surface color of the surface (e.g., user's skin color [skin tone attribute], background color, or the like), texture (e.g. dry skin, pimples, pores, wrinkles, or the like) [skin texture attribute and/or skin condition attribute], or the like. Additionally or alternatively, the visual input may be analyzed to determine locations or positioning of components of the surface… coordinates of the eyes [eye attribute], nose, eyebrows [eyebrow attribute], lips, or the like, may be identified” [0124].).

Regarding claim 8, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 1 as addressed above, and Shalah-Abboud additionally discloses providing the digital source image data and the eyelash recommendation to an image generation module (Image generation module: Simulation Module 650; see “Simulation Module 650 may be configured to simulate implementation of the instructions of the makeup application plan generated by Makeup Application Planner 620 on a 3D model of the subject generated by 3D Model Generator 640” [0178]. From Fig. 3A, the 3D surface of the subject’s face (digital source image data; see “the 3D surface of the face of the subject may be determined based on a visual input capturing the subject… the visual input may be obtained from visual sensors, such as a camera” [0123]) is used to generate a desired look (eyelash recommendation), and “the makeup application plan may be generated based on the 3D surface of the face and the desired look” [0127]. Therefore, the digital source image data and eyelash recommendation are provided to an image generation module through the makeup application plan.); and generating a modified image or 3D model based on the digital source image data and the eyelash recommendation (See Fig. 2B; “the makeup application plan may be obtained [in step 210b] using one or more of the methods described in FIGS. 3A-3B, or portions thereof” [0102]. See “On Step 262, an intermediate simulated outcome [modified image or model] depicting the subject wearing makeup in accordance with a partial application of the makeup application plan may be generated… the intermediate simulated outcome may be generated based on a 3D model or a digital image of the subject's face” [0115-0116]. See also [0113-0114].).

Regarding claim 9, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 8 as addressed above, and Shalah-Abboud additionally discloses displaying the modified image or 3D model in an eyelash enhancement user interface (See “On Step 264, the intermediate simulated outcome [modified image or model] may be displayed to the user, such as on a computer screen, a mobile device, or any other display device accessible to the user or the subject” [0117]. See also Fig. 2B.).

Regarding claim 10, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 9 as addressed above, and Shalah-Abboud additionally discloses wherein the eyelash enhancement user interface further includes virtual try-on functionality that allows modification of the eyelash recommendation via user interaction with the modified image or 3D model (See “On Step 272, …the user may be enabled to review the intermediate simulated outcome [modified image or 3D model] and make any necessary adjustments [user interaction] to the makeup application plan before continuing with the final makeup application. On Step 274, the makeup application plan may be updated based on the user review” [0119-0120]. In Fig. 2B, the process returns to step 264 (displaying the intermediate simulated outcome) after step 274. See also [0117].).
Regarding claim 11, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 1 as addressed above, and Harding additionally discloses receiving user input from an eyelash enhancement user interface (See “The control system can have a user interface… which would allow the user to designate [input] which region of the eye would use [eyelash extension] size A, which region would use size B, which region would use a mix of A and B, etc.” [0067]. See also [0070-0071] and [0125].), wherein the …robot control instructions are further based on the user input (See “Since robotic mechanism 219 is using a vision or other system to locate the extensions, the exact placement of the tray in the loading zone can be flexible” [0067]. This means that when the robotic mechanism 219 selects a chosen extension size for application, the robot control instructions are based on the user input. See also [0069-0071] and [0125].). Pelrine discloses “microrobot control instructions” (see rejection of claim 12 above). Thus, the combination as a whole teaches the claim.

Regarding claim 19, Harding/Pelrine/Calina/Shalah-Abboud/Fu discloses the limitations of claim 12 as addressed above, and Shalah-Abboud additionally discloses wherein the circuitry is further configured to: provide the digital source image data and the eyelash recommendation to an image generation module (Image generation module: Simulation Module 650; see “Simulation Module 650 may be configured to simulate implementation of the instructions of the makeup application plan generated by Makeup Application Planner 620 on a 3D model of the subject generated by 3D Model Generator 640” [0178]. The Simulation Module 650 and Makeup Application Planner 620 are stored in memory 607 and run by processor (circuitry) 602 (see Fig. 6 and [0168-0169]). From Fig. 3A, the 3D surface of the subject’s face (digital source image data; see “the 3D surface of the face of the subject may be determined based on a visual input capturing the subject… the visual input may be obtained from visual sensors, such as a camera” [0123]) is used to generate a desired look (eyelash recommendation), and “the makeup application plan may be generated based on the 3D surface of the face and the desired look” [0127]. Therefore, the digital source image data and eyelash recommendation are provided to an image generation module through the makeup application plan.); generate a modified image or 3D model based on the digital source image data and the eyelash recommendation (The Simulation Module 650 is stored in memory 607 and run by processor (circuitry) 602 (see Fig. 6 and [0168-0169]). See Fig. 2B; “the makeup application plan may be obtained [in step 210b] using one or more of the methods described in FIGS. 3A-3B, or portions thereof” [0102]. See “On Step 262, an intermediate simulated outcome [modified image or model] depicting the subject wearing makeup in accordance with a partial application of the makeup application plan may be generated… the intermediate simulated outcome may be generated based on a 3D model or a digital image of the subject's face” [0115-0116]. See also [0113-0114].); display the modified image or 3D model in an eyelash enhancement user interface (See “On Step 264, the intermediate simulated outcome [modified image or model] may be displayed to the user, such as on [circuitry/user interface:] a computer screen, a mobile device, or any other display device accessible to the user or the subject” [0117]. See also Fig. 2B, Fig. 6, [0166], [0168-0169], and [0178].); and receive user input from the eyelash enhancement user interface (Circuitry: “I/O Module 605 may be utilized to provide an output to and receive input from a user [such as] obtaining user input indicative of the desired look, displaying makeup results to the user” [0166]. See “On Step 272, …the user may be enabled to review the intermediate simulated outcome [modified image or 3D model] and make any necessary adjustments [user interaction] to the makeup application plan before continuing with the final makeup application. On Step 274, the makeup application plan may be updated based on the user review” [0119-0120]. In Fig. 2B, the process returns to step 264 (displaying the intermediate simulated outcome) after step 274. See also [0117].), wherein the …robot control instructions are further based on the user input (See “On Step 290b, the dynamically updated makeup application plan or portion thereof may be implemented to achieve the desired look taking into account the movement of the subject and an implementation of the portion of the makeup application plan” after user review/input in step 272 (see [0121] and Fig. 2B). The makeup application plan constitutes robot control instructions: “the makeup application plan may comprise instructions to an automatic makeup applicator [robot] for a process that provides the desired look” [0127]. See also [0108].). Pelrine discloses “microrobot control instructions” (see rejection of claim 12 above). Thus, the combination as a whole teaches the claim.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Moya Ly whose telephone number is (571)272-5832. The examiner can normally be reached Monday-Friday 10:00 am-6:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOYA LY/
Examiner, Art Unit 3658

/Ramon A. Mercado/
Supervisory Patent Examiner, Art Unit 3658
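Stepping back from the legalese, the claim 12 mapping above traces one pipeline: source image, eyelash region, eyelash map, recommendation engine, microrobot instructions. A toy sketch of that flow follows; every name, type, threshold, and heuristic here is hypothetical and drawn from neither the application nor the cited references:

```python
# Skeletal, illustrative rendering of the claim 12 pipeline
# (image -> eyelash map -> recommendation -> microrobot instructions).
from dataclasses import dataclass

@dataclass
class Lash:
    x: float      # root position along the lash line (mm), hypothetical unit
    angle: float  # deviation from the ideal fan direction (degrees)

@dataclass
class EyelashMap:
    lashes: list[Lash]

def generate_eyelash_map(region: list[Lash]) -> EyelashMap:
    # Stand-in for vision-based mapping of the lash fan.
    return EyelashMap(lashes=sorted(region, key=lambda l: l.x))

def recommend_positions(m: EyelashMap, attributes: dict) -> list[float]:
    # Toy recommendation: target lashes bent away from the fan (cf. the
    # correction rationale quoted from Calina), tuned by a subject attribute.
    threshold = 15.0 if attributes.get("lash_density") == "sparse" else 25.0
    return [l.x for l in m.lashes if abs(l.angle) > threshold]

def microrobot_instructions(positions: list[float]) -> list[dict]:
    # Toy motion commands; a real controller would plan 6-DOF trajectories.
    return [{"op": "apply_lash", "x": x} for x in positions]

m = generate_eyelash_map([Lash(1.0, 5.0), Lash(2.2, 30.0), Lash(3.1, -20.0)])
plan = microrobot_instructions(recommend_positions(m, {"lash_density": "sparse"}))
print(plan)  # [{'op': 'apply_lash', 'x': 2.2}, {'op': 'apply_lash', 'x': 3.1}]
```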
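The conclusion's reply-deadline rules are mechanical: a three-month shortened statutory period, a possible reset to the advisory action's mailing date when the first reply lands within two months, and a hard six-month cap. A simplified sketch with illustrative dates (only the Mar 19, 2026 mailing date comes from this record):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Calendar-month arithmetic, clamping to the target month's last day.
    y, m = divmod(d.month - 1 + months, 12)
    for day in (d.day, 30, 29, 28):
        try:
            return date(d.year + y, m + 1, day)
        except ValueError:
            continue

mailed = date(2026, 3, 19)             # this action's mailing date (from the timeline)
ssp_end = add_months(mailed, 3)        # shortened statutory period: THREE MONTHS
hard_deadline = add_months(mailed, 6)  # statutory maximum: SIX MONTHS

reply_filed = date(2026, 5, 11)        # illustrative first-reply date
advisory_mailed = date(2026, 6, 26)    # illustrative advisory action date

# Per the action: if the first reply comes within TWO MONTHS and the advisory
# action mails after the three-month period, the period (and the start of any
# 37 CFR 1.136(a) extension fees) runs from the advisory action's mailing date.
if reply_filed <= add_months(mailed, 2) and advisory_mailed > ssp_end:
    period_end = advisory_mailed
else:
    period_end = ssp_end

print(f"period expires {min(period_end, hard_deadline)}, hard cap {hard_deadline}")
```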

Prosecution Timeline

Jul 11, 2024: Application Filed
Nov 13, 2025: Non-Final Rejection (§103)
Feb 18, 2026: Response Filed
Mar 19, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12502770: RESILIENT MULTI-ROBOT SYSTEM WITH SOCIAL LEARNING FOR SMART FACTORIES
Granted Dec 23, 2025 (2y 5m to grant)

Patent 12479108: DEVICE AND CONTROL METHOD USING MACHINE LEARNING FOR A ROBOT TO PERFORM AN INSERTION TASK
Granted Nov 25, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 99% (+66.7%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
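A sketch of how these projection numbers plausibly combine, inferred from the caption above ("derived from career allow rate") rather than from any documented formula; the 99% cap is an assumption:

```python
# Inputs taken from the Examiner Intelligence cards above.
allow_rate = 0.60        # career allow rate (3 granted / 5 resolved)
interview_lift = 0.667   # relative lift observed in interviewed cases

grant_probability = allow_rate
# Assumed: the with-interview figure is the baseline scaled by the lift,
# capped at 99% so it never displays as a certainty.
with_interview = min(0.99, allow_rate * (1 + interview_lift))

print(f"grant probability {grant_probability:.0%}, "
      f"with interview {with_interview:.0%}")  # 60%, 99%
```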
