Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,343

Smart Makeup Compact

Non-Final OA — §102, §103, §112, §DP

Filed: Feb 16, 2024
Examiner: LEMIEUX, IAN L
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: ELC Management LLC
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 4m
Grant Probability with Interview: 97%

Examiner Intelligence

Grants 87% of resolved cases — above average.

Career Allow Rate: 87% (496 granted / 569 resolved; +25.2% vs TC avg)
Interview Lift: +9.6% (a moderate lift of roughly +10% for resolved cases with interview)
Typical Timeline: 2y 4m avg prosecution (34 applications currently pending)
Career History: 603 total applications across all art units
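The headline figures above are simple ratios; a quick consistency check (assuming the dashboard rounds the allowance rate to the nearest whole percent, and that the total-application count is resolved plus pending):

```python
granted, resolved, pending = 496, 569, 34  # career totals reported above

allow_rate = granted / resolved * 100      # career allowance rate, in percent
print(round(allow_rate, 1))                # 87.2 -> displayed as "87%"

# Assumption: "Total Applications" = resolved cases + currently pending cases
total_applications = resolved + pending
print(total_applications)                  # 603, matching the career history figure
```

Both reported figures are internally consistent under that reading.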

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)

Tech Center average is an estimate • Based on career data from 569 resolved cases
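Each per-statute delta is consistent with a single estimated Tech Center average of 40.0%. A minimal sketch verifying that reading (an assumption on my part: each delta is the examiner's rate minus the TC average, in percentage points):

```python
# (examiner rejection rate %, reported delta vs TC average) per statute
stats = {
    "§101": (11.2, -28.8),
    "§103": (39.6, -0.4),
    "§102": (19.1, -20.9),
    "§112": (19.4, -20.6),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # recover the TC average implied by each delta
    print(statute, round(implied_tc_avg, 1))  # 40.0 for every statute
```

That every delta implies the same 40.0% figure supports reading the "Tech Center average estimate" as one shared baseline rather than per-statute averages.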

Office Action

§102 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1-18 and 43-44 are currently pending in U.S. Patent Application No. 18/444,343, and an Office action on the merits follows.

Eligibility Analysis - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Examiner finds that the analysis of claims 1-18 and 43-44 of the instant application follows Pathway B of the analysis flow illustrated in MPEP 2106, and that the claims are accordingly directed to eligible subject matter. More specifically, this is a Prong Two finding of ‘Yes’ (the claims recite ‘additional elements’ that serve for integration in view of MPEP 2106.05(a), (b), and/or (e), as distinguished from merely comprising ‘additional elements’/limitations subsumed within an exception and failing for integration – MPEP 2106.05(f), (g) and/or (h)).

Examiner recognizes that the final ‘providing … guidance’ limitation may individually fall under the Certain Methods of Organizing Human Activity grouping – in view of the exemplary ‘managing personal behavior or relationships or interactions between people (including … teaching, and following rules or instructions)’. Assuming arguendo that such a characterization/interpretation is precluded (i.e., determining that it is an ‘additional element’ for consideration at Prong Two, rather than disqualifying it from being an ‘additional element’ via a Prong One finding drawing it to an exception), this limitation as broadly recited may also fail for integration if it can be asserted that the limitation does not impose meaningful limits (given its breadth) on the claim – see the three factors/considerations under MPEP 2106.05(g), (1)-(3), and also MPEP 2106.05(h) if it can be asserted that the output/guidance at best generally links the exception of ‘teaching a person how to perform a desired makeup application’ to a field of use wherein the guidance is delivered by generic computer hardware (for claims 43 and 44 in particular).

Examiner also notes that a Prong One analysis as prescribed in the MPEP, and as demonstrated for Examples 47-49 in the 2024 PEG (https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf), may also find ‘identifying, by one or more processors, one or more facial features of the face of the user on the 3-D map…’ to be drawn to the mental processes Abstract Idea grouping (similar to Example 47, claim 2, step (d), detecting one or more anomalies in a data set using a trained ANN – drawn to the mental processes grouping). As the recent examples make clear, Prong One analysis is not limited to a single grouping, as all three of the Abstract Idea groupings fall under a single exception – the Abstract Idea exception (as distinguished from, e.g., a Law of Nature exception). The ‘identifying’ as recited places few if any constraints on ‘how’ the identifying is to be performed, and accordingly it is not of any complexity precluding such an identifying from being practically performed in the mind (a user may identify features even of/on a displayed 3-D model/map).
This same consideration might hold for the ‘analyzing … real-time data’ limitation if its breadth includes a user/operator viewing real-time imagery and visually/mentally ‘analyzing’ it broadly. The analyzing in question, however, serves to generate a 3-D map associated with the face of the user, which is relied upon/specifically referenced in subsequent limitations. Examiner does not understand this limitation to be fairly/practically performed in the mind, and accordingly it is evaluated as an ‘additional element’ at Prong Two of Step 2A. See also the August 04, 2025 Memo at page 2, with reference to footnotes 7 and 8 (https://www.uspto.gov/sites/default/files/documents/memo-101-20250804.pdf).

This additional element serves for integration at least in view of MPEP 2106.05(e), if not in view of MPEP 2106.05(a), based on a finding that it reasonably facilitates/realizes one or more improvements at least suggested in Applicant’s disclosure ([0003] of the PGPUB – it facilitates augmented reality (AR) guidance that may improve interactivity and assist a user with better makeup application outcomes). Even if the same/equivalent motivation is present in the prior art (e.g., US 2019/0208892 A1 at paragraph [0002]; US 2019/0208887 (relied upon below) at paragraphs [0002], [0030-0035], [0060], etc.), compliance with 35 U.S.C. 101 and with 102/103 involves separate/distinct inquiries/analyses.

Considerations of MPEP 2106.05(b) are of limited pertinence, given the Examiner’s understanding that Applicant’s invention concerns less a novel structure for a compact per se (which might have otherwise required differences in routing), and instead a computer-implemented method as useable in/with such a compact (see 2106.05(b) and II, identifying considerations with respect to how integral any use of the ‘particular machine’ is to accomplishing the associated process/method). For the instant claims the compact is arguably more an object on which the method/process operates.
While Examination practice is limited as described above, the courts have declined to adopt the enumerated Abstract Idea groupings identified in the 2019 PEG, and Examiner accordingly encourages prosecution/amendments avoiding what might be perceived as ‘applying’ broad classes of ML to any ‘new’ field of use. See, e.g., Recentive Analytics, Inc. v. Fox Corp., Appeal No. 2023-2437 (Fed. Cir. Apr. 18, 2025); Rideshare Displays, Inc. v. Lyft, Inc., No. 23-2033 (Fed. Cir. September 29, 2025); etc.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

At least instant claims 1/43/44 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over one or more claims of copending Application No. 18/779,833 (PGPUB US 2026/0024445 A1), in further view of references of record (and corresponding rationales) as applied in the prior art rejections that follow, where necessary. Examiner notes that claims 1/19/20 of the reference recite “analyzing real-time data associated with a face of a user captured by one or more sensors in order to generate a three-dimensional map associated with the face of the user; identifying one or more facial features of the face of the user on the three-dimensional map associated with the face of the user;” and a final providing-guidance limitation that is of a breadth falling under/within (even if narrower than) the broader genus recited in the instant claims. Examiner also notes that Fig. 3, steps 302 and 304 of the reference Application closely correspond to the instant Application’s Fig. 4, steps 404 and 406, respectively. One or more dependent claims also appear to have a direct correspondence (e.g., instant claim 2 to reference claim 2, etc.). This is a provisional nonstatutory double patenting rejection.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 16 recites, in part, the limitation “cleaning component”, which is indefinite as it is not understood to have an ‘ordinary and customary’ meaning in the art, and Applicant’s Specification fails to provide a clear/unambiguous meaning for the term (see MPEP 2173.01, and more specifically the flowchart of MPEP 2111.01) when considered in the context of the additionally recited ‘configured to…’ language. PGPUB US 2025/0261743 A1 at, e.g., [0039], [0094], etc., establishes that the cleaning component is one of components 214, housed within device 100 as a whole – but fails to disclose any exemplary embodiments/structure. [0039] suggests the cleaning component is one that is distinct from other components, such as ‘dispensers’/compartments 106 and ‘the user interface’ 104. Given the lack of exemplary disclosure, it is unclear how even a compartment like 106 housing a cleaning solution/product can be ‘configured to’, and so actively (as opposed to being used passively by the device operator), clean or disinfect one or more other components.

The claim limitation “cleaning component configured to one or more of: clean or disinfect one or more components of the intelligent cosmetic compact device” (claim 16) has been evaluated under the three-prong test set forth in MPEP § 2181, subsection I, but the result is inconclusive. Thus, it is unclear whether this limitation should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it is not clear, for the generic/nonce term ‘component’, whether the term ‘cleaning’ is a structural modifier, and what associated structure is sufficient for actively (as opposed to passively being used for) performing the functional language considered/evaluated at Prong B – “clean or disinfect one or more components of the intelligent compact device”. Examiner notes that Applicant’s disclosure uses the term ‘component’ to describe structural aspects, e.g., 214, in addition to arguably non-structural aspects such as the AR component ([0034], claim 11) that is of UI 210. The boundaries of this claim limitation are ambiguous; therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

In response to this rejection, applicant must clarify whether this limitation should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Mere assertion regarding applicant’s intent to invoke or not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph is insufficient. Applicant may:

(a) Amend the claim to clearly invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by reciting “means” or a generic placeholder for means, or by reciting “step.” The “means,” generic placeholder, or “step” must be modified by functional language, and must not be modified by sufficient structure, material, or acts for performing the claimed function;

(b) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, should apply because the claim limitation recites a function to be performed and does not recite sufficient structure, material, or acts to perform that function;

(c) Amend the claim to clearly avoid invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by deleting the function or by reciting sufficient structure, material or acts to perform the recited function; or

(d) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, does not apply because the limitation does not recite a function or does recite a function along with sufficient structure, material or acts to perform that function.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

1. Claims 43-44, 1-3, 5-6, 8-13 and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Besen et al. (US 2019/0208887 A1).

As to claim 43, Besen discloses a computer-implemented method for operating an intelligent cosmetic compact device (Fig. 2A, 210 in conjunction with 315; [0071] “In an embodiment, during the instructional unit, client device-based sensors actively monitor the performance of the user and compare output metrics to established standards for corresponding techniques. If it is determined, for example, that the user is applying the incorrect cosmetic, the client device may provide guidance and encouragement to the user via audible alerts, visual alerts, tactile alerts, or a combination thereof. The user may, in turn, respond to the alert and continue makeup application with the confidence of knowing the technique is being performed appropriately”, etc.), the method comprising:

receiving, by one or more processors, an indication of a makeup look selected by the user via a user interface ([0044] “FIG. 4A is a flowchart of sensing and evaluation of sensor inputs during a coaching module, according to an exemplary embodiment. Following user selection of the desired look …”, [0057] “First, a user selects a makeup coaching module S534. For example, the user may select a specific style or look of interest. Alternatively, client device based software may intelligently offer occasion-based inspirations that complement the user”, [0058] “In an embodiment, the client device-based software may offer a variety of styles including, but not limited to, everyday office, first date, movie night, and fine dining. For each occasion, a style or look is offered to the user. To this end, client device software may also incorporate social media content associated with the user's user profile to better inform and predict styles of interest to the user based upon preferences indicated within the user's social media presence”, [0060] “The client device-based software will adjust recommended occasion-based looks accordingly and present them to the user for look selection”, [0076], etc.);

analyzing, by the one or more processors, real-time data, captured by one or more sensors, associated with the face of the user, in order to generate a three-dimensional map associated with the face of the user ([0043] “The front-facing sensors 323 of the client device 315 may include, but are not limited to, an infrared camera 317, an infrared flood illuminator 318, a proximity sensor 322, a dot projector 319, a visible light camera 320, and a visible light flood illuminator 321. The combination of the abovementioned front-facing sensors 323 allows for capture and recreation of realistic three-dimensional models of a user's facial features, skin color, and tone. Such depth-dependent digitization of the face is understood in the art, as evidenced by U.S. Pat. No. 9,582,889 B2, which is incorporated herein by reference. In an embodiment, recognition of the facial features is performed via digital reconstruction of two-dimensional images acquired from a visible light camera”, [0056] “client device-based sensors are used to perform a three-dimensional digitization S531 of the facial features, color, and tone of the user. Client device-based software then adjusts to calibrate the image and color projection according to ambient light”, etc.);

identifying, by the one or more processors, one or more facial features of the face of the user on the three-dimensional map associated with the face of the user ([0059] “In another embodiment, augmented reality capabilities enable the user to realize a look prior to look selection. Utilizing client device-based sensors, the client device utilizes prior and active depth mapping, including light filtering, to provide a realistic rendering of what a style may look like on a user”; [0069], [0046] “As the user applies makeup, the client device actively monitors user position, orientation, movement and facial features S442. Client device-based sensors, controlled by client device-based software, generate data including, but not limited to, user facial features, user head orientation … Stored data is accessed during display projection of future images to improve spatial projection of templated shapes relative to prior instances S443. To this end, stored data, including that which is related to relative user position and orientation, are used to predict future positions and orientations of the user so that adjustments to the display projection are more intuitive, allowing templated shapes to follow the contours and movements of the user S444. For example, initially, a user is applying makeup to the right side of the face and the client device display is projecting a templated shape onto the semi-transparent display, accordingly. As the user moves the head to more easily view a section of the face, client device-based software recognizes the movement and adjusts the display projection accordingly. There exists, however, delay in the rendering as the client device-based software generates the display projection. With subsequent use, the client device-based software will generate a library of prior user motions that can be called upon during future instances of similar motions. In this way, as the number of instances of a certain motion and stored data increase, the client device-based software will be able to better predict the velocity and direction with which a movement is occurring, thus eliminating lag time in display projection generation”, [0070] “These templated shapes, indicated by white outlined objects in FIG. 6B, are customized to the facial features of the user and are presented similarly to a "paint-by-numbers" approach, wherein each shape corresponds to a specific makeup cosmetic”, etc.; Examiner understands this ‘identifying’, with reference to Applicant’s block 406 and [0080], to facilitate the AR rendering that is used to convey, e.g., the ‘preview’ of [0081] of Applicant’s PGPUB, among other displayed content (not only a preview but also during application) – and Besen at least suggests the same, as those templated shapes are “customized to the facial features of the user” and projected so as to best align with the positions and orientations of user features in real-time, as the user/operator moves/performs portions of a makeup/cosmetic application process); and

providing, by the one or more processors, via the user interface, guidance associated with applying one or more cosmetic products to the facial features of the user in order to achieve the makeup look selected by the user ([0061] “An instructional unit can include, but is not limited to, video tutorials, projection of templated shapes, or a combination thereof.
Next, user controlled step-by-step makeup application steps are projected from the client device display and are visible through the semi-transparent display S537”, [0070], [0071] “In an embodiment, during the instructional unit, client device-based sensors actively monitor the performance of the user and compare output metrics to established standards for corresponding techniques. If it is determined, for example, that the user is applying the incorrect cosmetic, the client device may provide guidance and encouragement to the user via audible alerts, visual alerts, tactile alerts, or a combination thereof. The user may, in turn, respond to the alert and continue makeup application with the confidence of knowing the technique is being performed appropriately”, etc.).

As to claim 44, this claim is the non-transitory CRM claim corresponding to the method of claim 43 and is rejected accordingly. Besen further discloses memory associated with client device 715 in addition to that of, e.g., remote system structure 750 (see Fig. 7).

As to claim 1, this claim is the system/device claim corresponding to the method of claim 43 and corresponding limitations are rejected accordingly – e.g., those same steps as performed by the one or more processors. Claim 1 requires additional structural limitations which Besen further discloses, i.e., a system/device comprising a portable cosmetic compact housing (Figures 1 and 2A, portable makeup/cosmetic compact 210 comprising housing sub-component 216 for housing client device/smartphone 315, housing for color palette 211, upper lid 213, mirror 201, etc.) configured to be opened and closed (Figs. 2A, 2C and 2D, [0037] “From the user's perspective, both a mirror reflection of the user and client device-generated objects will be visible simultaneously on through the semitransparent display 205”, [0038] opens/closes via flexible hinge 209, etc.), the portable housing comprising:

a display that is accessible when the portable cosmetic compact housing is opened (Figs. 1, 2A, [0037], display of 315 is viewable/accessible through 205 when compact 210 is opened);

one or more compartments for storing cosmetic products that are accessible when the portable cosmetic compact housing is opened (211);

a user interface (UI of 315, [0037] “To implement the interactive user interface of the client device, the client device, with activated coaching module, is positioned within the client device housing 216”, [0073] “Further, the user interface or the client device can display tutorials on fundamentals of makeup application. The user interface can create and download protocols for a regimen or routine. The user interface can train, track usage and compare the tracked usage to the protocol, the regimen, and the routine. The user interface can calculate a score based on the tracked usage. The user interface can store the scores and the tracked usage of the coaching software in the memory of the client device. Moreover, the user interface can be used to make a purchase of any products related to the makeup products registered within the client device-based software as well as recommendations of color tones, product lines, and other products related to the current style, look, or future experimental techniques”; Examiner understands the ‘UI’ to implicitly be structural in nature, as it is a distinct structural limitation of the device of claim 1, like the display, compartment(s), sensor, processor, and memory – Applicant’s interface 210 is disclosed as housing the ‘AR component’, and ‘component’ is arguably a nonce term as it concerns 112(f) invocation, MPEP 2181, and the UI may accordingly serve as a structural modifier for disclosed ‘software component’ embodiments);

one or more sensors configured to capture real-time data associated with a face of a user (Fig. 3B, [0043] “The front-facing sensors 323 of the client device 315 may include, but are not limited to, an infrared camera 317, an infrared flood illuminator 318, a proximity sensor 322, a dot projector 319, a visible light camera 320, and a visible light flood illuminator 321”);

one or more processors (inherent as required for 315, and also as part of remote portions of the system as a whole/server(s) 750); and

one or more memories ([0046] “store the acquired data to local storage, cloud-based storage, or a combination thereof”, [0053], [0073], Fig. 7, etc.).

As to claim 2, Besen discloses the device of claim 1. Besen further discloses the device wherein the one or more sensors include one or more of a camera or a depth sensor ([0043] camera(s) 320/317, dot projector 319 for depth mapping, [0059] “Utilizing client device-based sensors, the client device utilizes prior and active depth mapping”, etc.).

As to claim 3, Besen discloses the device of claim 1.
Besen further discloses the device wherein the user interface includes a haptic feedback component, and wherein providing guidance associated with applying the one or more cosmetic products to the facial features of the user includes providing haptic guidance via the haptic feedback component ([0049] “According to an embodiment, data generated from client device-based sensors and stored to local storage, cloud-based storage, or a combination thereof, may be utilized to provide real-time feedback to the user regarding user performance in the form of visual commands, audible commands, tactile commands, or a combination thereof”, [0071] “If it is determined, for example, that the user is applying the incorrect cosmetic, the client device may provide guidance and encouragement to the user via audible alerts, visual alerts, tactile alerts, or a combination thereof”, Examiner understands haptic/tactile to be synonymous in the context of feedback/commands for directing the user – see also Hong et al. (US 2024/0108119 A1) as applied below for the case of claim 4). As to claim 5, Besen discloses the device of claim 1. Besen further discloses the device wherein the user interface includes an audio component, and wherein providing guidance associated with applying the one or more cosmetic products to the facial features of the user includes providing audio guidance via the audio component (see [0049], [0071] audio command/alert embodiment(s), [0054-055] output of 315 comprises video and audio data). As to claim 6, Besen discloses the device of claim 1. Besen further discloses the device wherein the providing guidance associated with applying the one or more cosmetic products to the facial features of the user includes providing visual guidance via the display (see [0049], [0071] video command/alert embodiment(s), in further view of that video tutorial disclosure of e.g. [0061] and the display of those projected template shapes [0046]). 
As to claim 8, Besen discloses the device of claim 1. Besen discloses the device further comprising a communication interface configured to communicate with a mobile device, external to the intelligent cosmetic compact device (Besen 315 comprises a communication interface configured to communicate with other/additional mobile devices, external/remotely located from 210, for the instances that detachable component 315 of 210 is situated within housing 216 or otherwise). As to claim 9, Besen discloses the device of claim 8. Besen further discloses the device wherein the communication interface is a wired communication interface (Fig. 8, connector 827 wired/direct electrical connection with micro/mini/USB port of 315, [0074] “Such electrical connection of a makeup compact to a client device is understood in the art, as evidenced by U.S. Pat. No. 9,692,864 B1, which is incorporated herein by reference”). As to claim 10, Besen discloses the device of claim 8. Besen further discloses the device wherein the communication interface is a wireless communication interface (common to 315, [0052] “The proximity sensor housed within the client device detects the presence of the client device housing and sends a signal to the client device-based software to begin the coaching module S457. In an embodiment, the client device and client device housing further comprise wireless identification tags. These wireless identification tags, including, but not limited to, near field communication devices, provide a unique makeup compact identifier that would expedite the user onboarding experience, instantly identifying the available makeup palette”, etc., while not required/relied upon, see also Truong US2021/0195713 A1 Fig. 2, wireless communication 162 of compact 150 for communication with 104). As to claim 11, Besen discloses the device of claim 1. 
Besen further discloses the device wherein the user interface includes an augmented reality (AR) component configured to generate and display an AR version of the three-dimensional map associated with the face of the user (Fig. 6B 637, AR disclosure of [0059], [0069], [0046], [0070] “These templated shapes, indicated by white outlined objects in FIG. 6B, are customized to the facial features of the user and are presented similarly to a "paint-by-numbers" approach, wherein each shape corresponds to a specific makeup cosmetic”, etc.,). As to claim 12, Besen discloses the device of claim 11. Besen further discloses the device wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to generate a three-dimensional preview of the makeup look selected by the user as applied to the three-dimensional map associated with the face of the user, and wherein the AR component is further configured to generate and display an AR version of the three-dimensional preview of the makeup look ([0059] “In another embodiment, augmented reality capabilities enable the user to realize a look prior to look selection. Utilizing client device-based sensors, the client device utilizes prior and active depth mapping, including light filtering, to provide a realistic rendering of what a style may look like on a user”, [0069] “To aid the user in style and look selection, augmented reality capabilities enable the user to realize a style prior to look selection. Utilizing client device-based sensors, the client device utilizes prior and active depth mapping, including light filtering, to provide a realistic rendering of what a style may look like on a user 635”). As to claim 13, Besen discloses the device of claim 12. 
Besen further discloses the device wherein the three-dimensional preview of the makeup look selected by the user includes a three-dimensional preview of the application process of the makeup look selected by the user ([0061] “An instructional unit can include, but is not limited to, video tutorials, projection of templated shapes, or a combination thereof. Next, user controlled step-by-step makeup application steps are projected from the client device display and are visible through the semi-transparent display S537. A video tutorial of a first step of the instructional unit is displayed. Following the video tutorial, the appropriate templated shapes are projected onto the semitransparent display. As the user completes each makeup application step according to the templated shapes projected from the client device display, the user indicates as much to the client device-based software via audible command, visual command, tactile command, or a combination thereof. If additional steps are required to complete the instructional unit (e.g. if the instructional unit requires makeup application of more than one cosmetic) S538, the instructional unit begins the next step, including the next step of the video tutorial and the appropriate templated shapes. If no additional steps are required S538, makeup application of the current instructional unit has ended”).

As to claim 17, Besen discloses the device of claim 1. Besen discloses the device further comprising a light source configured to provide light to the face of the user ([0043], visible light flood illuminator 321 of 315).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1) in view of Hong et al. (US 2024/0108119 A1).

As to claim 4, Besen discloses the device of claim 3. Besen suggests the device wherein the haptic guidance includes patterns of haptic feedback associated with applying respective cosmetic products to the respective facial features of the user ([0071] “If it is determined, for example, that the user is applying the incorrect cosmetic, the client device may provide guidance and encouragement to the user via audible alerts, visual alerts, tactile alerts, or a combination thereof”; Examiner notes also that the ‘to the respective features’ limitation appears, at a minimum, suggested by Besen in view of, e.g., Fig. 6B and the manner in which those projected overlays pertain to different features, e.g. a white overlay for the area underneath the eyebrow, the eyelid, and the inner corner of the eye. It also stands to reason that, for a user to be efficiently/effectively guided by the feedback of Besen, different and readily discernable feedback, for the various situations encountered by the user during the process of achieving the desired look, would be required).
Hong further evidences the obvious nature of haptic guidance including patterns of haptic feedback associated with applying respective cosmetic products to the respective facial features of the user (see path presentation engine 118, path determination engine 114, and face detection engine 112; [0013] “From this recording, a relative applicator path is determined that specifies locations of a makeup applicator during the application. The relative applicator path is defined with respect to distances from various facial landmarks, such that the relative applicator path can be presented with respect to other faces by calculating the distances from the various facial landmarks of the other faces. Presentations including, among other things, ghost outlines of applicators, audio feedback, haptic feedback, or visual prompts may be generated in order to help a second subject guide an applicator along the relative applicator path and thereby improve the application of makeup by the second subject”; [0055] “if the second applicator 602 is determined to be outside of a predetermined range around the relative applicator path, an auditory or haptic indication may be provided to prompt movement of the second applicator 602 back to the relative applicator path. Also, in embodiments wherein the relative applicator path includes an amount of pressure applied by the applicator, the presentation may include a visual, auditory, or haptic indication when it is determined that an amount of pressure being applied with the second applicator 602 is different from the amount of pressure indicated by the relative applicator path”; etc.).
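As a purely illustrative, editorial sketch (not part of the record, and not Hong's implementation), the feedback logic the quoted passages describe, namely haptic alerts fired when an applicator strays from a landmark-relative path or applies the wrong pressure, could be modeled roughly as follows. Every function name, tolerance, and coordinate below is hypothetical.

```python
import math

# A "relative applicator path" is modeled here as waypoints given as (dx, dy)
# offsets from a facial landmark, echoing Hong's distances-from-landmarks idea.

def distance_to_path(point, path):
    """Smallest straight-line distance from the applicator tip to any waypoint."""
    return min(math.dist(point, wp) for wp in path)

def haptic_cue(point, pressure, path, target_pressure,
               pos_tolerance=0.5, pressure_tolerance=0.2):
    """Return the haptic alert to fire, or None if the stroke is on track."""
    if distance_to_path(point, path) > pos_tolerance:
        return "buzz: return to path"      # off-path alert (cf. Hong [0055])
    if abs(pressure - target_pressure) > pressure_tolerance:
        return "pulse: adjust pressure"    # pressure alert (cf. Hong [0055])
    return None

# Hypothetical example: the path hugs the brow bone; the applicator drifts away.
path = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.3)]
print(haptic_cue((1.1, 1.5), 1.0, path, target_pressure=1.0))   # off the path
print(haptic_cue((1.0, 0.25), 1.0, path, target_pressure=1.0))  # on track -> None
```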
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Besen such that the tactile/haptic feedback disclosed therein further comprises that as provided for an applicator component distinct from 315, including patterns of haptic feedback associated with applying respective cosmetic products to the respective facial features of the user as taught/suggested by Hong, the motivation as similarly taught/suggested therein being that such feedback may further assist the user in a more exact/precise manipulation of applicators frequently used in conjunction with the compact device, and further defined with respect to the user-specific relative applicator path (Hong [0013]).

2. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1).

As to claim 7, Besen discloses the device of claim 6. Besen further discloses the device further comprising an accelerometer, and wherein providing the visual guidance via the display includes adjusting an orientation of the visual guidance provided via the display based on data captured by the accelerometer (inherent to 315, in view of the manner in which most smartphones, including the Apple iPhone as disclosed at [0043], comprise accelerometers as a standard feature for detecting device orientation, changes in motion, etc., and, e.g., modifying display functions/orientation accordingly (in addition to crash/fall/drop detection, among others); Examiner takes Official Notice (MPEP 2144.03) of the manner in which smartphones typically/commonly adjust displayed content orientation in response to accelerometer data and detected changes in device orientation).
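For illustration only, the commonplace technique the Official Notice refers to, selecting a display orientation from the accelerometer's gravity reading, can be sketched as below. The function name, axis convention, and labels are editorial assumptions, not anything disclosed by Besen.

```python
# Purely illustrative: map the gravity vector reported by an accelerometer
# (components in g along the device's x/y axes) to the orientation in which
# displayed content should be drawn.

def display_orientation(ax, ay):
    """Choose a screen orientation from accelerometer gravity components."""
    if abs(ay) >= abs(ax):
        # Gravity mostly along the long axis: device held upright or inverted.
        return "portrait" if ay < 0 else "portrait-upside-down"
    # Gravity mostly along the short axis: device held sideways.
    return "landscape-left" if ax > 0 else "landscape-right"

print(display_orientation(0.02, -0.99))  # upright grip  -> "portrait"
print(display_orientation(0.98, -0.05))  # rotated grip  -> "landscape-left"
```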
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Besen so as to rely on an accelerometer built into 315 to optimize/adjust the orientation of displayed content, since such a modification would amount to no more than applying known techniques to yield a predictable/expected result further characterized by a reasonable expectation of success. See MPEP 2143, Example Rationale A, with reference to KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007) (at page 401 of the bound volume, “a combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results”).

3. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1) in view of Jung (US 2020/0405039 A1).

As to claim 14, Besen discloses the device of claim 1. Besen fails to disclose the device as further comprising one or more temperature control components configured to control temperatures associated with the one or more compartments for storing the one or more cosmetic products within a particular range of temperatures. Jung evidences the obvious nature of one or more temperature control components configured to control temperatures associated with the one or more compartments for storing the one or more cosmetic products within a particular range of temperatures ([0063] “Additionally, the controlling means 90 senses the temperature around the cosmetics contained in the cartridge 30 inside the body 11 by a temperature sensor, and controls the operation of the cooling means 80 accordingly”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Besen to further comprise one or more temperature control components configured to control temperatures associated with the one or more compartments for storing cosmetic products within a particular range of temperatures as taught/suggested by Jung, the motivation as similarly taught/suggested therein being that such a temperature control means ensures the associated cosmetic products maintain their integrity/usability for the user.

4. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1) in view of Kreuzer et al. (US 2020/0387942 A1).

As to claim 15, Besen discloses the device of claim 1. Besen further suggests the device wherein the non-transitory computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to: track the usage of the one or more cosmetic products from the one or more compartments (Besen [0073] suggests tracking product usage as it pertains to a tracked usage of the coaching software and the various protocols, routines, and/or regimens selected/performed by the user, wherein at least which products are used, in addition to use frequency/amount, is readily measured in association with which desired looks are selected by the user and how frequently); provide a notification soliciting related makeup products via the user interface ([0072-0073]). Besen fails to explicitly disclose determining a low-stock status/requirement for refill of one or more cosmetic products and a corresponding prompt/notification distinct from one soliciting makeup products more broadly (i.e. soliciting more specifically one that is low/requiring refill/replenishment).
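For illustration only, the kind of per-product usage tracking and low-stock prompting at issue for claim 15 could look roughly like the sketch below. The class, method names, and threshold values are hypothetical and are not drawn from Besen or from any cited reference.

```python
# Purely illustrative: count uses per product and flag products whose tracked
# use has reached a replenishment threshold, so a refill prompt can be raised.

class UsageTracker:
    def __init__(self, replenish_after):
        self.replenish_after = replenish_after  # uses allowed per product
        self.uses = {}

    def record_use(self, product):
        """Increment the tracked use count for one product."""
        self.uses[product] = self.uses.get(product, 0) + 1

    def low_stock(self):
        """Products whose tracked use meets their replenishment threshold."""
        return [p for p, n in self.uses.items()
                if n >= self.replenish_after.get(p, float("inf"))]

tracker = UsageTracker({"foundation": 5, "blush": 30})
for _ in range(5):
    tracker.record_use("foundation")
tracker.record_use("blush")
print(tracker.low_stock())  # ["foundation"] -> prompt a refill notification
```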
Kreuzer, however, evidences the obvious nature of determining a low-stock status/requirement for refill of one or more cosmetic products and a corresponding prompt/notification provided on the basis of intelligently tracked use (Fig. 7, 706-708, store product use data in user profile and analyze use event data to generate feedback information, in view of Fig. 500 recommendation, e.g. “Don’t forget to replace your skin care product after 5 uses”; [0052] “Still further, the database 210 may store a set of rules regarding the appropriate frequency, duration, and manner of use for a particular activity. The set of rules may also include an estimated total number of times the activity may be performed and/or an estimated total duration over multiple instances of performing the activity before products related to the activity need to be replenished, such as the number of showers before the user needs to replace the soap and shampoo. In addition to sets of rules, the database 210 may store machine learning models for determining the appropriate frequency, duration, and manner of use for the particular activity that is specific to a particular user based on the user's previous patterns of use and/or the results experienced by the user”; [0056] “The user feedback information may include a recommendation to replenish the skin care product (e.g., after a threshold number of uses of the skin care product which in some instances may be determined via the activity data, or when the skin care product exceeds a threshold age according to the set of rules and/or the machine learning models)”; [0063]; etc.).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Besen such that the product solicitation of, e.g.,
[0072-0073] more specifically concerns at least one product characterized by a low-stock status/requirement for refill/replenishment based on tracked use as taught/suggested by Kreuzer, the motivation as similarly taught/suggested therein and readily recognized by a POSITA being that such a solicitation might be more effective given the user's usage history and would further serve to ensure the user does not run out of the product and/or has sufficient product for subsequent application(s).

5. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1) in view of Ezzina et al. (US 2023/0147318 A1).

As to claim 16, Besen discloses the device of claim 1. Besen fails to explicitly disclose the device further comprising a cleaning component configured to one or more of: clean or disinfect one or more components of the intelligent cosmetic compact device. Ezzina, however, evidences the obvious nature of a case/housing for cosmetic product further comprising a cleaning component configured to clean or disinfect one or more components of said case/housing (Figs. 5-7; [0014-0015] “To limit such a dispersion during the transport, it is in particular known to place an applicator-typically a sponge-above the sifter so that at least one portion of the cosmetic product dispersed through the orifices is captured by the sponge”; [0015] “However, the sponge does not completely prevent the dispersion of the powder cosmetic product out of the cavity, which poses a problem for the cleanliness of the case”; [0023] “To solve this problem, it has already been proposed to use a rigid plug attached to the cover and equipped with fins that scrape the sifter when the cover is screwed.
The sifter used is a rigid sifter”; [0024] “However, such a rigid plug does not allow for satisfactory cleaning of the sifter because once the fins are in close contact with the sifter, they form a stop which prevents the cover from being tightened further”; [0116] “the contact face 76 is intended to rub against the sifter 22 during the rotation of the cover 12. This rubbing allows to push the grains of cosmetic product through the orifices in the sifter 22 back into the cavity 16. Thus, the sifter 22 and the contact face 76 are cleaned and do not comprise cosmetic product when the cover 12 is opened again”; [0124]; etc.).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system of Besen so as to further comprise means for controlling the cleanliness of the case/compact/housing and/or the dispersion of cosmetic product as taught/suggested by Ezzina, the motivation as similarly taught/suggested therein at [0021] being that reducing a defect in cleanliness may serve to avoid/reduce user disappointment/dissatisfaction and/or product waste.

6. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Besen et al. (US 2019/0208887 A1) in view of Truong (US 2021/0195713 A1).

As to claim 18, Besen discloses the device of claim 17. Besen fails to explicitly disclose the device wherein e.g. 315 provides particular lighting conditions while the one or more cosmetic products are applied to the facial features of users (absent a broad interpretation wherein the ‘particular’ lighting condition is, e.g., ‘on’ as distinguished from otherwise). Truong evidences the obvious nature of compact 150 configured to provide particular lighting conditions facilitating cosmetic application (Fig. 2, [0001] “lighting system can be embodied in a carried accessory, such as cosmetic compact case, a make-up brush, or a mirror having a computer system with network connectivity.
In one embodiment, the lighting system connects to a server computing system that stores a plurality of light conditions. Light usually affects the way cosmetics and make-up are perceived by the user and observers. Generally, a user does not apply cosmetics or make-up under lighting conditions that are similar to the lighting conditions in which a user will be present”; [0044] “Then, the computing device 164 can issue instructions for adjusting the output of the light emitting diodes 158 in order so that the combined ambient lighting conditions and the light produced by the light emitting diodes together resembles a simulated lighting condition which is produced proximate to the compact case 150 so that the user can see what the make-up will look like in a selected simulated lighting conditions. The combined simulated lighting conditions can be verified by the light sensors 156 or else, the light emitting diodes 158 output may be further adjusted. The compact case 150 may include controls, such as button 166 to scroll through certain pre-selected locations, the lighting conditions of which may be simulated. Then, when a location is selected, the light emitting diodes 158 will be controlled to output light that when combined with the ambient light will result in the lighting conditions of the selected location”; etc.).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Besen to further provide particular lighting conditions so as to facilitate a desired cosmetic application as taught/suggested by Truong, the motivation as similarly taught/suggested therein being that such configurable lighting enables the user/operator to better achieve the desired look as it is likely to be perceived under expected/future/destination-specific lighting conditions (Truong [0001-0003]).
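As a rough, hypothetical sketch of the control idea in Truong's quoted [0044] passage (drive the LEDs so that ambient light plus LED output approximates a selected simulated condition, then verify via the light sensors), with all names, units, and numbers assumed for illustration:

```python
# Purely illustrative; not Truong's implementation. Light levels are in lux.

def led_output(target_lux, ambient_lux, max_led_lux=800.0):
    """LED contribution needed so ambient + LED approximates the target."""
    needed = max(0.0, target_lux - ambient_lux)  # LEDs can only add light
    return min(needed, max_led_lux)              # clamp to the hardware limit

def adjust_until_matched(target_lux, ambient_lux, tolerance=5.0):
    """One verify-and-correct pass, echoing the sensor check in Truong [0044]."""
    led = led_output(target_lux, ambient_lux)
    measured = ambient_lux + led                 # stand-in for a sensor reading
    return led, abs(measured - target_lux) <= tolerance

# Hypothetical "office lighting" preset of 450 lux with 120 lux ambient:
led, ok = adjust_until_matched(target_lux=450.0, ambient_lux=120.0)
print(led, ok)  # 330.0 True
```

Note the asymmetry the clamp encodes: when the ambient level already exceeds the target, the LEDs cannot subtract light, so the match check fails and some other adjustment would be needed.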
Additional References

Prior art made of record and not relied upon that is considered pertinent to applicant's disclosure: the additionally cited references (see attached PTO-892) otherwise not relied upon above have been made of record in view of the manner in which they evidence the general state of the art.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN L LEMIEUX, whose telephone number is (571) 270-5796. The examiner can normally be reached Mon - Fri, 9:00 - 6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN L LEMIEUX/
Primary Examiner, Art Unit 2669

Prosecution Timeline

Feb 16, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602825
Human body positioning method based on multi-perspectives and lighting system
2y 5m to grant Granted Apr 14, 2026
Patent 12592086
POSE DETERMINING METHOD AND RELATED DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586397
METHOD AND APPARATUS EMPLOYING FONT SIZE DETERMINATION FOR RESOLUTION-INDEPENDENT RENDERED TEXT FOR ELECTRONIC DOCUMENTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579840
BEHAVIOR ESTIMATION DEVICE, BEHAVIOR ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12573086
CONTROL METHOD, RECORDING MEDIUM, METHOD FOR MANUFACTURING PRODUCT, AND SYSTEM
2y 5m to grant Granted Mar 10, 2026
Based on the 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
97%
With Interview (+9.6%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
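The headline figures above can be reproduced from the stated career data; this is a quick editorial sanity check, not an additional data source.

```python
# Figures from this page: 496 granted of 569 resolved, +9.6 pt interview lift.
granted, resolved = 496, 569
allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct))                       # 87 -> the quoted grant probability

interview_lift_pts = 9.6
print(round(allow_rate_pct + interview_lift_pts))  # 97 -> the "with interview" figure
```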
