Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,382

Smart Mirror with Makeup Look Superimposition and Guided Tracing

Status: Non-Final OA (§103; nonstatutory double patenting)

Filed: Feb 16, 2024
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Elc Management LLC
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 2m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (120 granted / 162 resolved; +12.1% vs TC avg, above average)
Interview Lift: +19.3% (allowance rate among resolved cases with an interview vs without)
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 187 total applications across all art units
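The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts from the panel above (the Tech Center average is back-derived from the stated +12.1% delta, not an independent figure):

```python
# Allowance-rate arithmetic behind the "Examiner Intelligence" panel.
# Counts come from the panel above; the Tech Center average is implied
# by the stated delta rather than independently reported.

granted = 120    # applications allowed by this examiner (career)
resolved = 162   # allowed + abandoned, across all art units

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~74.1%, rounded to 74% above

delta_vs_tc = 12.1                               # stated delta, percentage points
implied_tc_avg = allow_rate * 100 - delta_vs_tc
print(f"Implied Tech Center average: {implied_tc_avg:.1f}%")
```

The +19.3% interview lift is the analogous difference between the allow rate among interviewed resolved cases and non-interviewed ones; the underlying split counts are not shown in the panel, so they are not reconstructed here.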

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.7% (+25.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 162 resolved cases.

Office Action

Rejections: §103; §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-13 and 37-39 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6, 11-18 and 42-43 of U.S. Patent Application No. 18/624,361. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following claim chart:

Instant Application, Claim 1: An intelligent mirror device, comprising:
App. No. 18/624,361, Claim 1: A head’s up display (HUD) device for cosmetic application, comprising:

Instant: a mirror having an integrated display component;
Reference: as displayed by the user interface or as shown in a mirror.

Instant: a user interface;
Reference: a user interface;

Instant: one or more sensors configured to capture real-time data associated with a face of a user; one or more processors;
Reference: one or more sensors configured to capture real-time data associated with a face of a user; one or more processors;

Instant: and one or more memories storing non-transitory computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to:
Reference: and one or more non-transitory memories storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to:

Instant: receive an indication of a makeup look selected by the user;
Reference: receive an indication of a makeup look selected by the user;

Instant: analyze the real-time data associated with the face of the user in order to generate a three-dimensional map associated with the face of the user;
Reference: analyze the real-time data associated with the face of the user in order to generate a three-dimensional map associated with the face of the user;

Instant: identify one or more facial features of the face of the user on the three-dimensional map associated with the face of the user; and
Reference: identify one or more facial feature of the face of the user on the three-dimensional map associated with the face of the user; and

Instant: cause the user interface to provide, via the integrated display component, guidance associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user,
Reference: provide, via the user interface, guidance associated with applying one or more cosmetic products to the one or more facial features of the user in order to achieve the makeup look selected by the user,

Instant: wherein the guidance is at least partially superimposed upon the face of the user in the mirror.
Reference: wherein the guidance is at least partially superimposed upon the face of the user, as displayed by the user interface or as shown in a mirror.
All limitations of claim 1 in the instant application are anticipated by claim 1 of Application No. 18/624,361.

Instant Application, Claim 37: A computer-implemented method for operating an intelligent mirror device, the method comprising:
App. No. 18/624,361, Claim 42: A computer-implemented method for operating a head’s up display (HUD) device for cosmetic application, the method comprising:

Instant: receiving, by one or more processors, an indication of a makeup look selected by a user;
Reference: receiving, by one or more processors, an indication of a makeup look selected by a user;

Instant: analyzing, by the one or more processors, real-time data associated with the face of the user captured by one or more sensors of the intelligent mirror device in order to generate a three-dimensional map associated with the face of the user;
Reference: analyzing, by the one or more processors, real-time data associated with a face of the user captured by one or more sensors of the HUD device in order to generate a three-dimensional map associated with the face of the user;

Instant: identifying, by the one or more processors, one or more facial features of the face of the user on the three-dimensional map associated with the face of the user; and
Reference: identifying, by the one or more processors, one or more facial features of the face of the user on the three-dimensional map associated with the face of the user; and

Instant: causing, by the one or more processors, a user interface of the intelligent mirror device to provide, via an integrated display component of a mirror of the intelligent mirror device, guidance associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user, wherein the guidance is at least partially superimposed upon the face of the user in the mirror.
Reference: providing, by the one or more processors, via a user interface of the HUD device, guidance associated with applying one or more cosmetic products to the one or more facial features of the user in order to achieve the makeup look selected by the user, wherein the guidance is at least partially superimposed upon the face of the user, as displayed by the user interface or as shown in a mirror.

All limitations of claim 37 in the instant application are anticipated by claim 42 of Application No. 18/624,361.

Instant Application, Claim 38: A non-transitory computer-readable medium storing instructions for operating an intelligent mirror device that, when executed by one or more processors, cause the one or more processors to perform a method comprising:
App. No. 18/624,361, Claim 43: A non-transitory computer-readable medium storing computer-readable instructions for operating a head’s up display (HUD) device for cosmetic application that, when executed by one or more processors, cause the one or more processors to perform a method comprising:

Instant: receiving an indication of a makeup look selected by a user;
Reference: receiving an indication of a makeup look selected by a user;

Instant: analyzing real-time data associated with the face of the user captured by one or more sensors of the intelligent mirror device in order to generate a three-dimensional map associated with the face of the user;
Reference: analyzing real-time data associated with a face of the user captured by one or more sensors of the HUD device in order to generate a three-dimensional map associated with the face of the user;

Instant: identifying one or more facial features of the face of the user on the three-dimensional map associated with the face of the user; and
Reference: identifying one or more facial features of the face of the user on the three-dimensional map associated with the face of the user; and

Instant: causing a user interface of the intelligent mirror device to provide, via an integrated display component of a mirror of the intelligent mirror device, guidance associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user, wherein the guidance is at least partially superimposed upon the face of the user in the mirror.
Reference: providing, via a user interface of the HUD device, guidance associated with applying one or more cosmetic products to the one or more facial features of the user in order to achieve the makeup look selected by the user, wherein the guidance is at least partially superimposed upon the face of the user, as displayed by the user interface or as shown in a mirror.

All limitations of claim 38 in the instant application are anticipated by claim 43 of Application No. 18/624,361.

The remaining rejected claims map one-to-one: claims 2, 3 and 4 of the instant application are anticipated by claims 2, 3 and 4 of Application No. 18/624,361, respectively; instant claim 5 is anticipated by claim 6; and instant claims 6-13 are anticipated by claims 11-18, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-8, 14-15, 17 and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Yang, US PG-Pub US 2022/0211163 A1, in view of Venkataraman, US PG-Pub US 2019/0164341 A1.

Regarding Claim 1, Yang teaches an intelligent mirror device (¶[0148]: "As shown in FIG. 9, the make-up assistance system 60 comprises a smart mirror 61"), comprising:

a mirror having an integrated display component (¶[0042]: "The smart mirror and such terminal devices each may be equipped with a display for displaying images"; the mirror may be equipped with a display);

a user interface (¶[0075]: "an interface presented by the smart mirror or the smart terminal to the user, the user may select a modification operation to be performed on the make-up plan"; ¶[0075] discloses a user interface);

one or more sensors configured to capture real-time data associated with a face of a user (Fig. 1, step S101 shows a facial image of the user being acquired, and ¶[0057] discloses "step S101 may be performed in real time");

one or more processors and one or more memories storing non-transitory computer-readable instructions that, when executed by the one or more processors (¶[0146]: "As shown in FIG. 8, the make-up assistance apparatus 50 comprises a memory 51 and a processor 52, the memory 51 having stored therein instructions executable by the processor 52, wherein the instructions, when executed by the processor 52, cause the processor 52 to perform the make-up assistance method"), cause the one or more processors to:

receive an indication of a makeup look selected by the user (¶[0046]: "In step S102, a make-up plan selected by the user is acquired"; the user selects a make-up plan for a desired look);

analyze the real-time data associated with the face of the user (¶[0056]: "select a make-up plan needed by the user to perform make-up. In the make-up process, the smart mirror may update the facial image of the user during the make-up periodically or in real time, compare the facial image with a makeup effect image in the make-up plan selected by the user"; ¶[0056] discloses obtaining real-time facial images of the user and comparing them with a makeup effect image in real time); and

cause the user interface to provide, via the integrated display component, guidance associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user (¶[0071]-¶[0072]: "In step S208, makeup modification prompt information is generated for a region where the difference is greater than the threshold. ... the make-up region to be modified may be highlighted in the makeup effect image or the facial image to visually prompt the user, and a position of a make-up region where the image difference is greater than the threshold may be broadcast by voice to acoustically prompt the user."; these paragraphs disclose visually prompting the user through the user interface that the current makeup deviates from the desired look and should be adjusted),

wherein the guidance is at least partially superimposed upon the face of the user in the mirror (¶[0081]: "For example, a makeup effect image may be an image obtained by applying one or more makeup effect such as smoky makeup, evening makeup, light makeup, and the like to the facial image of the user. A makeup effect image may be selected by a user from a plurality of makeup effect images. The makeup effect image selected by the user may be used as a reference in make-up process."; ¶[0081] discloses a makeup effect image in which simulated makeup effects are superimposed on the user's face, which the user can select for the desired look).

Yang does not explicitly teach generating a three-dimensional map associated with the face of the user, or identifying one or more facial features of the face of the user on the three-dimensional map associated with the face of the user.

Venkataraman teaches generating a three-dimensional map associated with the face of the user (¶[0045]: "obtaining (610) image data including images in which a face is visible captured from different viewpoints. In many embodiments, image data is obtained in a similar manner as described above. A depth map can be generated (620) from the obtained image data"; ¶[0045] discloses obtaining face images and generating a depth/3D map from them), and identifying one or more facial features of the face of the user on the three-dimensional map (¶[0046]: "Key features can be identified (630) in the obtained image data. In many embodiments, known facial recognition techniques can be applied to the image data to identify key features, such as, but not limited to, eyebrows, eyes, the nose, the mouth, the chin, the ears, or any other key features as appropriate to the requirements of a given application. In many embodiments, a user can input which key features to detect."; ¶[0046] discloses identifying key facial features of the user in the image data and depth map).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang with Venkataraman in order to create a 3D map of the face of the user and identify facial features on the map. One skilled in the art would have been motivated to modify Yang in this manner in order to generate a 3D model of the face using the depth of the key feature points. (Venkataraman, Abstract)

Regarding Claim 2, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches wherein the one or more sensors include one or more of a camera or a depth sensor (¶[0042]: "The smart mirror and such terminal devices each may be equipped with a display for displaying images and an imaging device for capturing images of users, such as a camera."; the smart mirror is equipped with a camera).

Regarding Claim 5, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches wherein the user interface includes an audio component, and wherein providing guidance associated with applying the one or more cosmetic products to the facial features of the user includes providing audio guidance via the audio component (¶[0145]: "a difference broadcast module configured to prompt, by voice, a position of the make-up region where the image difference is greater than the threshold."; an audio prompt is issued when the makeup does not match the selected make-up plan).

Regarding Claim 6, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches a communication interface configured to communicate with a mobile device external to the intelligent mirror device (¶[0148]: "The smart mirror 61 is communicatively coupled to the makeup matching server 62 through, for example, but not limited to, a wired connection and a wireless connection, to realize information transfer between the make-up assistance apparatus 611 and the makeup matching server 62"; the smart mirror is coupled to a server when transmitting facial-image data).
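The three-dimensional mapping attributed to Venkataraman (a depth map generated from face images, with key features identified on it) can be pictured with a short sketch: hypothetical 2D landmark pixels are back-projected through an assumed pinhole camera using per-pixel depth. All coordinates, depth values, and camera parameters below are invented for illustration and are not taken from the cited reference.

```python
import numpy as np

# Toy sketch of the mapping the rejection attributes to Venkataraman:
# 2D facial landmarks + a depth map -> a sparse 3D "map" of the face.
# Landmark positions, the flat depth map, and the camera intrinsics are
# all hypothetical stand-ins; a real system would estimate depth from
# images captured at different viewpoints.

depth = np.full((480, 640), 0.6)   # per-pixel depth in meters (flat stand-in)
fx = fy = 500.0                    # assumed focal lengths, in pixels
cx, cy = 320.0, 240.0              # assumed principal point

landmarks_2d = {                   # (col, row) pixel coordinates, invented
    "left_eye": (260, 200),
    "right_eye": (380, 200),
    "nose_tip": (320, 280),
    "mouth": (320, 340),
}

def lift_to_3d(u, v):
    """Back-project a pixel through the pinhole model using its depth."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

face_map_3d = {name: lift_to_3d(u, v) for name, (u, v) in landmarks_2d.items()}
for name, point in face_map_3d.items():
    print(name, np.round(point, 3))
```

The resulting dictionary of named 3D points is the kind of structure on which facial features could then be "identified on the three-dimensional map" as the claims recite.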
Regarding Claim 7, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 6, and Yang further teaches wherein the communication interface is a wired communication interface (¶[0148]: "The smart mirror 61 is communicatively coupled to the makeup matching server 62 through, for example, but not limited to, a wired connection and a wireless connection"; a wired communication link with the server).

Regarding Claim 8, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 6, and Yang further teaches wherein the communication interface is a wireless communication interface (¶[0148], same passage; a wireless communication link with the server).

Regarding Claim 14, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches wherein the instructions, when executed by the one or more processors, further cause the one or more processors to identify one or more blemishes of the face of the user on the three-dimensional map associated with the face of the user (¶[0084]-¶[0086]: "In step S304, a skin detection is performed on the make-up region by the makeup matching server, so as to obtain skin information of the facial image. For example, the skin detection includes a skin color detection and skin spot detection. The skin color detection is an analysis and calculation process of human skin color pixels, which may accurately identify skin areas."; blemishes/skin spots are detected in the facial image acquired from the user), and wherein providing, via the user interface, guidance associated with applying one or more cosmetic products to the facial features of the user is further based on the identified one or more blemishes of the face of the user (¶[0088]-¶[0089]: "In step S305, at least one makeup plan matched with the make-up region is determining by the makeup matching server according to the skin information and the makeup effect image. For example, the makeup matching server may perform matching based on the makeup area of the face image according to the skin information and the makeup effect image obtained in above steps, so that the makeup matching server may determine at least one makeup plan conforming to the user requirement for the user to select."; a makeup plan is determined for the user based on the acquired skin information to best achieve the desired makeup effect).

Regarding Claim 15, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches wherein providing, via the user interface, guidance associated with applying one or more cosmetic products to the facial features of the user is further based on a skin type associated with the user, a skin health condition associated with the user, a hydration level of the skin of the user, a skin tone associated with the user, current temperature conditions, current humidity conditions, current precipitation conditions, current lighting conditions, a current time of day, or one or more properties associated with the one or more cosmetic products (¶[0089]: "the makeup plan may include cosmetics. For example, according to the difference between the skin color information obtained from the facial image and a skin color information of the makeup effect image, a base cosmetic having a color complementary to the skin color information obtained from the facial image is selected from a plurality of base cosmetics each having a color. If the user selects the base cosmetic, the difference of the skin color between the facial image and the makeup effect image may be reduced. For example, the skin color information includes at least one of a grayscale of a pixel and a color of a pixel, and the base cosmetic includes at least one of a foundation and a blush."). Because the claim recites the list of conditions in the alternative ("or"), under the broadest reasonable interpretation the examiner need map only one condition. The examiner maps guidance based on the skin tone of the user, which ¶[0089] discloses: when constructing the makeup plan, the user's skin color is determined and a cosmetic product is selected based on that skin color.

Regarding Claim 17, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang further teaches wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to analyze one or more of the real-time data associated with the face of the user captured by the one or more sensors, or previously-captured data associated with the face of the user captured by the one or more sensors (¶[0056]: "the smart mirror may update the facial image of the user during the make-up periodically or in real time, compare the facial image with a makeup effect image in the make-up plan selected by the user, and provide makeup modification prompt information, to inform the user of deficiencies of the current makeup, so that the user may adjust the current makeup in a targeted manner to ensure that the user's make-up effect is consistent with the makeup effect of the selected make-up plan"; facial images are processed in real time and compared to a makeup effect image to alert the user to deficiencies in applying the makeup for the desired look), in order to determine one or more of a skin type or skin health condition associated with the user (¶[0122]: "In step S506, the makeup matching server performs a skin detection on the make-up region, so as to obtain skin information of the facial image."; the server performs skin detection to obtain skin information, which could inherently be used to determine the skin type of the user).

Regarding Claim 37, claim 37 is a method claim substantially corresponding to claim 1; see the discussion of claim 1 above for similar limitations. Furthermore, Yang teaches a computer-implemented method for operating an intelligent mirror device (¶[0150]: "The embodiments of the present disclosure further provide a computer readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to perform the make-up assistance method according to any of the embodiments described above, for example, the make-up assistance method described above with reference to FIG. 1 or FIG. 2.") comprising one or more processors (¶[0025]: "According to another aspect of the present disclosure, there is provided a make-up assistance apparatus comprising a memory and a processor").

Regarding Claim 38, claim 38 is a computer-readable-medium claim substantially corresponding to claim 1; see the discussion of claim 1 above for similar limitations. Furthermore, Yang teaches a non-transitory computer-readable medium storing instructions for operating an intelligent mirror device (¶[0150] discloses a computer-readable medium stored in memory) that, when executed by one or more processors, cause the one or more processors to perform a method (¶[0025] discloses a processor performing the method).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yang (US 2022/0211163 A1) in view of Venkataraman (US 2019/0164341 A1), further in view of Besen et al. (US 2019/0208887 A1).
Regarding Claim 3, the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, and Yang teaches wherein the guidance includes a plurality of steps associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user (¶[0049] discloses allowing a user to select a makeup plan, which comes with a description of the make-up steps and how to achieve the desired look), and wherein the instructions, when executed by the one or more processors, further cause the one or more processors to cause the user interface to provide, via the integrated display component, guidance associated with a first step of the plurality of steps (¶[0056]: "select a make-up plan needed by the user to perform make-up. In the make-up process, the smart mirror may update the facial image of the user during the make-up periodically or in real time, compare the facial image with a makeup effect image in the make-up plan selected by the user, and provide makeup modification prompt information, to inform the user of deficiencies of the current makeup, so that the user may adjust the current makeup in a targeted manner to ensure that the user's make-up effect is consistent with the makeup effect of the selected make-up plan"; once the make-up plan is selected, the interface provides guidance when the current makeup is deficient and prompts the user to adjust it to match the selected plan).

Yang and Venkataraman do not explicitly teach analyzing the real-time data associated with the face of the user in order to determine that the first step of the plurality of steps has been completed by the user, and, based on that determination, causing the user interface to provide, via the integrated display component, guidance associated with a second step of the plurality of steps.

Besen teaches these limitations (¶[0070]: "The client device-based software then begins playing a first video tutorial of the instructional unit and then projecting templated shapes from the client device display 637. These templated shapes, indicated by white outlined objects in FIG. 6B, are customized to the facial features of the user and are presented similarly to a 'paint-by-numbers' approach, wherein each shape corresponds to a specific makeup cosmetic. Following completion of each step in an instructional unit by the user, the user indicates to the client device-based software via verbal command, visual command, tactile command, or a combination thereof, that the current step is completed. If additional steps in the instructional unit, or additional instructional units in the coaching module, are required, the coaching session proceeds to initialization of subsequent video tutorials and templated shapes."; once the user selects a desired look, a video tutorial plays with step-by-step instructions, and upon completing a step the user prompts the device to move on to the next part of the tutorial).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Besen in order to analyze the real-time data to determine whether the user has completed the first step before starting the second step.
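The step-gated guidance described in the Besen mapping (advance to the next step only once the current step is determined to be complete) can be sketched as a small loop. The completion heuristic, coverage values, and step names below are hypothetical stand-ins, not Besen's actual method, which relies on a user command to signal completion.

```python
# Hypothetical step-advance loop for guided makeup application: guidance
# shows one step at a time and advances only when analysis of the live
# frame indicates the step is complete. step_done() is an invented
# stand-in for that analysis.

STEPS = ["apply primer", "trace eyeliner", "blend eyeshadow"]

def step_done(frame_coverage: float) -> bool:
    # Stand-in completion check: treat the step as done once the traced
    # region is mostly covered in the real-time frame.
    return frame_coverage >= 0.9

def advance(current: int, frame_coverage: float) -> int:
    """Return the index of the step to display for the next frame."""
    if step_done(frame_coverage) and current + 1 < len(STEPS):
        return current + 1
    return current

step = 0
for coverage in (0.2, 0.95, 0.4, 0.93):   # synthetic per-frame measurements
    step = advance(step, coverage)
print("now showing:", STEPS[step])
```

The same loop structure works whether completion is detected from the image data (as claim 3 recites) or signaled by a user command (as in Besen); only the body of step_done() changes.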
One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to provide a makeup application experience combining coaching with convenience in a “paint-by-numbers” approach. (Besen, ¶[0001])

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yang (US PG-Pub US 20220211163 A1) in view of Venkataraman (US PG-Pub US 20190164341 A1) in view of Takegawa et al. (JP 2015072697 A).

Regarding Claim 4, while the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, they do not explicitly teach wherein the guidance includes tracing lines or arrows associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user.

Takegawa teaches wherein the guidance includes tracing lines or arrows associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user. (Page 3, <Features>, Paragraph 3, “The makeup line for the user is displayed on the display element, and the user can easily finish the makeup easily by tracing the ideal makeup line on his / her face projected on the display unit”; in this section of the prior art, a tracing line is provided to the user to assist them when applying makeup.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Takegawa in order to provide tracing lines for applying makeup. One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to allow the user to easily finish the makeup by tracing the ideal makeup line on his/her face projected on the display unit. (Takegawa, Page 3, <Features>, Paragraph 3)

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US PG-Pub US 20220211163 A1) in view of Venkataraman (US PG-Pub US 20190164341 A1) in view of Luo et al. (US Patent US 11580682 B1).

Regarding Claim 9, while the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, they do not explicitly teach wherein the user interface includes an augmented reality (AR) component configured to generate and display an AR version of the three-dimensional map associated with the face of the user, superimposed on the face of the user in the mirror, via the integrated display component.

Luo teaches wherein the user interface includes an augmented reality (AR) component (Fig. 6, element 214 shows an Augmented Reality (AR) makeup system) configured to generate and display an AR version of the three-dimensional map associated with the face of the user, superimposed on the face of the user in the mirror, via the integrated display component. (Fig. 8 shows the face of the person with no makeup displayed on a display screen in element 802, and element 806 shows the AR makeup superimposed onto the face of the user.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Luo in order to generate an AR version of makeup being superimposed onto the face of a user. One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to process images to generate augmented reality (AR) makeup within a messaging system.
(Luo, Col 1, Lines 14-15)

Regarding Claim 10, the combination of Yang, Venkataraman, and Luo teaches the intelligent mirror device of claim 9. Luo further teaches wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to generate a three-dimensional preview of the makeup look selected by the user as applied to the three-dimensional map associated with the face of the user (Col 16, Lines 57-67, “The user 708 may be a captured image or live image of a person such as a person that is using the mobile device 702. A live image indicates that the image is being captured or generated by the device and then being displayed on a display in real time. AR makeup 710 is a portion of the user 708 that is added by AR makeup module 606 to simulate the look of the extracted makeup image 612. AR makeup module preview 712 indicates an icon or preview which may be animated to indicate the makeup that will be generated as AR makeup 710 on the image of the user 708 by AR makeup module 606.”; as disclosed in this section of the prior art, the user is able to see a preview of the makeup on them through the device interface.), and wherein the AR component is further configured to generate and display an AR version of the three-dimensional preview of the makeup look selected by the user as applied to the three-dimensional map associated with the face of the user, superimposed on the face of the user in the mirror, via the integrated display component. (Fig. 8 shows the extracted makeup, labeled element 612, superimposed onto the face of the person in element 806 as a display preview to the user.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Luo in order to generate an AR preview of makeup being superimposed onto the face of a user.
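The superimposition shown in Luo's Fig. 8 amounts to compositing a makeup overlay onto the face image at masked positions. Below is a minimal alpha-blending sketch under stated assumptions (flat lists of (R, G, B) pixel tuples), not Luo's actual pipeline.

```python
# Minimal alpha-blending sketch (illustrative, not Luo's implementation):
# blend an AR makeup overlay into a face image wherever the mask is True.
# Images are flat lists of (R, G, B) tuples for simplicity.

def superimpose(face, overlay, mask, alpha=0.5):
    """Return face with overlay blended in at masked pixels."""
    out = []
    for f, o, m in zip(face, overlay, mask):
        if m:
            out.append(tuple(round(alpha * oc + (1 - alpha) * fc)
                             for fc, oc in zip(f, o)))
        else:
            out.append(f)  # unmasked pixels pass through unchanged
    return out
```

A real implementation would operate on image arrays and a per-pixel face mask derived from the three-dimensional face map, but the compositing step is the same weighted sum per channel.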
One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to process images to generate augmented reality (AR) makeup within a messaging system. (Luo, Col 1, Lines 14-15)

Regarding Claim 11, the combination of Yang, Venkataraman, and Luo teaches the intelligent mirror device of claim 10. Yang further teaches wherein the three-dimensional preview of the makeup look selected by the user includes a three-dimensional preview of the application process of the makeup look selected by the user. (¶[0049] discloses allowing a user to select a makeup plan, and the plan comes with a description of the makeup steps and how to achieve the desired look.)

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Yang (US PG-Pub US 20220211163 A1) in view of Venkataraman (US PG-Pub US 20190164341 A1) in view of Lewinson et al. (US PG-Pub US 20230277119 A1).

Regarding Claim 16, while the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, they do not explicitly teach wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to: analyze the real-time data associated with the face of a user to identify a skin reaction associated with the application of the one or more cosmetic products; and provide an alert, via the user interface, based on the identified skin reaction.
Lewinson teaches wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to: analyze the real-time data associated with the face of a user to identify a skin reaction associated with the application of the one or more cosmetic products (¶[0008], “the system enables a user to acquire an image of a patch area through a camera in a portable device, wherein the patch area is located on the skin of a patient”; ¶[0045], “Step 106 passes the patch test image into a pre-trained artificial intelligence/machine-learning model, which is used to identify areas corresponding to positive reactions in step 107.”; ¶[0006] discloses obtaining an image of a user’s skin, ¶[0045] discloses using machine learning to determine allergic reactions to certain products, and ¶[0050] discloses that cosmetic products were tested to see if they caused a skin reaction.) and provide an alert, via the user interface, based on the identified skin reaction. (¶[0047], “Step 110 integrates data from the machine-learning output in step 107 and the supplemental patch layout information from data sets 108 and 109 to label positive reactions according to the chemical corresponding to that reaction position. This information is then displayed to the user in step 111 as the original image with a bounding box overlay corresponding to positive patch results, which can then be labelled according to the specific contact allergens or irritants causing the positive reactions (see FIGS. 2A and 2B).”; ¶[0047] discloses displaying the information to the user if a product returned a positive allergic reaction.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Lewinson in order to determine skin reactions to certain products.
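Lewinson's analyze-then-alert sequence can be sketched with a simple threshold heuristic standing in for the pre-trained model. The region names, scores, and threshold below are illustrative assumptions, not values from the reference.

```python
# Hypothetical sketch of the detect-and-alert flow in Lewinson
# ¶[0045]-¶[0047]. A redness-score threshold stands in for the
# pre-trained machine-learning model; regions and threshold are
# illustrative.

def detect_reactions(region_redness, threshold=0.7):
    """Return the regions whose redness score exceeds the threshold."""
    return [name for name, score in region_redness.items()
            if score > threshold]

def alert_user(region_redness):
    """Format a user-facing alert, or return None if nothing is flagged."""
    flagged = detect_reactions(region_redness)
    if flagged:
        return "Possible skin reaction detected: " + ", ".join(flagged)
    return None
```

In the claimed device, the scores would come from real-time analysis of the face rather than a precomputed dictionary, and the alert would surface through the mirror's user interface.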
One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to identify specific allergens that produced positive skin contact reactions. (Lewinson, Abstract)

Claims 12-13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US PG-Pub US 20220211163 A1) in view of Venkataraman (US PG-Pub US 20190164341 A1) in view of Zarinabad Nooralipour et al. (US PG-Pub US 20240245300 A1).

Regarding Claim 12, while the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 1, they do not explicitly teach further comprising a light source configured to provide light to the face of the user. Zarinabad Nooralipour teaches further comprising a light source configured to provide light to the face of the user. (¶[0094], “In embodiments, skincare device 100 includes a first light source 109a, configured to emit light onto a first portion of a user's skin”)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Zarinabad Nooralipour in order to provide a light source for emitting light onto the face of the user. One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to provide an improved skincare device and methods of operating a skincare device. (Zarinabad Nooralipour, ¶[0004])

Regarding Claim 13, the combination of Yang, Venkataraman, and Zarinabad Nooralipour teaches the intelligent mirror device of claim 12. Zarinabad Nooralipour further teaches wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to control the light source to provide particular lighting conditions while the one or more cosmetic products are applied to the facial features of the user.
(¶[0094], “controller 101 is configured to generate control data 107a, 107b to control light sources 109a, 109b to each emit light having different parameters. Thus, in embodiments, controller 101 is configured to control light sources 109a, 109b such that light emission onto the first portion differs from light emission onto the second portion. In embodiments, skincare device 100 comprises one or more further light sources. In such embodiments, controller 101 may be configured to control each of the one or more further light sources independently (for example, such that each light source emits light having a different one or more parameters to the other light sources).”; ¶[0095], “It may be that the first portion and second portion each relate to different parts of the user's skin (for example, different ones of the user's forehead, cheeks, chin, nose, and periocular area”; ¶[0094]-¶[0095] disclose controlling the light emitted onto different parts of the user's face.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Zarinabad Nooralipour in order to control the light sources. One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to provide an improved skincare device and methods of operating a skincare device.
(Zarinabad Nooralipour, ¶[0004])

Regarding Claim 18, while the combination of Yang and Venkataraman teaches the intelligent mirror device of claim 17, Yang teaches analyzing one or more of: the real-time data associated with the face of the user captured by the one or more sensors, or previously-captured data associated with the face of the user captured by the one or more sensors (¶[0056] discloses acquiring images in real time to process the facial images), in order to determine one or more of the skin type or the skin health condition associated with the user (¶[0122] discloses using a server to determine skin information for applying the cosmetic product). However, they do not explicitly teach applying a trained machine learning model to one or more of the real-time data associated with the face of the user captured by the one or more sensors, or previously-captured data associated with the face of the user captured by the one or more sensors, to determine one or more of the skin type or the skin health condition associated with the user.

Zarinabad Nooralipour teaches applying a trained machine learning model to one or more of the real-time data associated with the face of the user captured by the one or more sensors, or previously-captured data associated with the face of the user captured by the one or more sensors, to determine one or more of the skin type or the skin health condition associated with the user. (¶[0129], “In embodiments, controller 101 is configured to determine the one or more skin features by operating a classifier (i.e. a classification algorithm). In such embodiments, it may be that the classifier has been trained using spectral sensor training data comprising characteristics of a corpus of training users' skin and indications of known skin features of the corpus of training users. In embodiments, the classifier comprises a machine learning agent.”; ¶[0129] discloses using a trained classifier to determine the skin condition of the user.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Yang and Venkataraman with Zarinabad Nooralipour in order to use machine learning to detect the skin condition of the user. One skilled in the art would have been motivated to modify Yang and Venkataraman in this manner in order to provide an improved skincare device and methods of operating a skincare device. (Zarinabad Nooralipour, ¶[0004])

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG, whose telephone number is (571) 272-4344. The examiner can normally be reached Monday-Friday, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HAN HOANG/Primary Examiner, Art Unit 2661

Prosecution Timeline

Feb 16, 2024
Application Filed
Apr 07, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant Granted Mar 24, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+19.3%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
