Prosecution Insights
Last updated: April 19, 2026
Application No. 17/564,004

SKIN TONE DETERMINATION AND FILTERING

Non-Final OA §103

Filed: Dec 28, 2021
Examiner: WANG, JIN CHENG
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Pinterest Inc.
OA Round: 4 (Non-Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 3y 7m
Grant Probability with Interview: 69%

Examiner Intelligence

Career Allow Rate: 59% (492 granted / 832 resolved; -2.9% vs TC avg)
Interview Lift: +10.3% for resolved cases with interview (moderate, ~+10% lift)
Typical Timeline: 3y 7m avg prosecution; 40 currently pending
Career History: 872 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates; based on career data from 832 resolved cases.
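The headline rates in this report follow directly from the career counts it cites; a quick sanity check using only figures shown above:

```python
# Sanity-check the examiner statistics reported above, using only
# the figures shown in this report.
granted, resolved = 492, 832

# Career allow rate: granted / resolved cases.
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # ≈ 59.1%, reported as 59%

# The report lists a 69% grant probability with an examiner interview
# against the 59% baseline, consistent with the stated +10.3% lift.
with_interview = 69.0
lift = with_interview - round(allow_rate)
print(f"Approximate interview lift: +{lift:.0f}%")  # ≈ +10%
```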

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered. Claims 1, 2, 7, 10-13, 17 and 20 have been amended. Claims 3, 8, 9, 18 and 19 have been cancelled. Claims 1, 2, 4-7, 10-17 and 20 are pending in the current application.

Response to Arguments

Applicant's arguments filed 11/17/2025 with respect to the previously cited Sartori Odizzio reference are not found persuasive. Odizzio teaches at FIGS. 9D-9G the selection of a range of colors in the same manner as applicant's range of selectable colors at FIG. 5 and Paragraph 0041. Applicant's skin tone is disclosed in the Specification merely as a range of selectable colors associated with the content items (which can be embodied as either virtual makeup products or looks, i.e., the stored composite facial images with the virtual makeup products applied). The claimed target skin tone is interpreted in the same manner as applicant's specification at FIG. 5 and Paragraph 0041 (namely, the color classes 502, 504, 506 and 508 of FIG. 5 associated with the content items, wherein the content items may refer to the virtual cosmetic product items or virtual looks, i.e., the composite facial images applied with the selected virtual makeup products having the selected colors/skin tones). The recommended looks (FIGS. 
9I-9J and Paragraphs 0095-0096) are selected or filtered based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Such a primary color of the virtual products included in the looks is selected by the user via the user interface of FIGS. 9A-9H when generating each of the looks (the composite facial images). Each of the recommended looks displayed in the menu 916i of FIG. 9I has a dominant/primary skin color, which is based on the primary color of its associated virtual makeup product. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. Odizzio discloses at each of FIGS. 9D-9G and Paragraphs 0088-0089 that menu 918d includes an option to select from a range of colors (skin tones), i.e., shades in a particular line of foundation, and menu 918g includes an option to select from a range of colors (lip skin colors or lip skin tones) for a particular line of lipstick products to be applied to the lips (lip skin) in the facial image 908. The ranges of colors of FIGS. 9D-9G are presented in the same manner as the range of colors 502-508 of FIG. 5 in applicant's specification. For example, in FIG. 9F, Odizzio teaches a 5-color eye shadow palette with 5 classes of colors ranging from dark to medium to light, and thus teaches the eyeshadow skin colors. The virtual lipstick product has a primary lip color (primary skin color). 
The virtual lipstick product, when applied to the composite facial image 908, is then saved to the database as a look (Paragraph 0039), so that the saved look also has a primary skin color. Similarly, in FIG. 9G, Odizzio teaches a 6-color lipstick palette ranging from dark to medium to light and therefore teaches the lip skin colors. In FIG. 9H, Odizzio teaches a 5-color eye shadow palette with 5 classes of eye shadow skin colors ranging from dark to medium to light (associated with eyeshadow skin). The virtual eyeshadow product has a primary eyeshadow skin color. The virtual eyeshadow product, when applied to the composite facial image 908, is then saved to the database as a look (Paragraph 0039), so that the saved look also has a primary skin color. Moreover, Odizzio teaches at FIGS. 9D-9G and Paragraphs 0088-0089 that, in response, the user is presented with a graphical menu 916d showing at least one virtual foundation (cheek) product or product line ... in response, the user is presented with a graphical element 916f showing a selected virtual makeup product that includes a palette of five colors (cheek skin colors) ... in response, the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line. The virtual foundation product has the primary cheek color. The virtual foundation product, when applied to the composite facial image 908, is then saved to the database as a look (Paragraph 0039), so that the saved look also has a primary skin color. Thus, Odizzio clearly shows at FIGS. 9D-9G the claimed feature of determining the respective skin tone, in the graphical menus 918d-918g associated with the additional content items, as the one or more graphical elements within 916d-916g corresponding to the selected skin tone (the range of colors selected in the graphical menus 918d-918g of FIGS. 9D-9G). 
B) The claimed skin tone is met by Sartori-Odizzio

The skin tone can be rightfully mapped to the terms disclosed in applicant's specification at Paragraph 0041 and FIG. 5. As such, the claimed skin tone is also met by the previously user-selected eyeshadow/lip/cheek color/tone of the virtual makeup products in each of the user interfaces of FIGS. 9D-9H, applied to the composite facial image 908 so as to change the skin tone of the composite facial image 908, then stored as custom looks in the virtual makeup platform, and then loaded into the user interface of FIGS. 9A-9J as a loaded image, so that the second plurality of content items (recommended looks of FIGS. 9I-9J) can be displayed with the skin tone of the recommended looks matching the skin tone of the loaded composite image 908. In a non-limiting example, the claimed target skin tone can be directly mapped to the skin tone of the composite facial image 908 in FIGS. 9A-9J that is aesthetically pleasing to the user. The target skin tone relating to the composite facial image 908 may be selected via user interaction in any one of the user interfaces of FIGS. 9A-9J, due to the application of the virtual makeup products (with a primary color) or due to the use of the filter tool 940 and the remover tool 950. For example, Odizzio teaches at Paragraph 0091 that, in response to receiving a makeup product selection and a request to apply the selected virtual makeup product with a primary color, a composite facial image 908 including a base image and a makeup image with the applied virtual makeup product may be displayed to the user via the graphical user interface. Odizzio teaches at Paragraphs 0088-0089 selecting a primary color of one of the virtual lipstick products, the virtual eyeshadow products or the virtual foundation products, so that the selected virtual makeup product(s) can be applied via the user interface to change the skin tone of the composite facial image 908. 
Odizzio teaches at Paragraph 0093 changing a skin tone of the composite facial image 908 via the user interaction of FIG. 9A (which also applies to the user interface of FIG. 9I or 9J for generating the recommended looks gleaned from all looks in the database of Paragraph 0038; thus the recommended looks meet the claimed second plurality of content items). Odizzio teaches that if the user does not like the selected makeup product as applied (resulting in a first primary skin tone of the facial image 908 if the generated look is saved to the database) or wishes to try a different makeup product (with a second primary skin tone if the generated look is stored to the database), the user can elect to remove the applied makeup product by selecting remover option 950. Odizzio's remover option 950 may act as a reset button, for example, removing all or some of the makeup images forming the composite image 908 in response to a user selection, and the user can use the interactive tool to finely remove specific areas of virtually applied makeup product (thus changing the skin tone of the composite image 908). The user interaction tools such as the filter tool 940 and remover tool 950 of FIG. 9A are also shown in FIGS. 9D-9H. Accordingly, the skin tone of the composite image 908 (a look) can be changed via user interaction by changing one of the eyeshadow color, lip color and facial (cheek) foundation color, or by changing the applied virtual makeup products using the filter tool 940 or the remover tool 950, so as to change the skin tone of the composite image 908. After changing the skin tone of the composite image 908, Odizzio teaches at FIGS. 9I-9J and Paragraph 0095 displaying the second plurality of content items (recommended looks) with the primary skin tone matching the skin tone of the composite facial image 908. The second plurality of content items (recommended looks) are produced in FIGS. 
9I-9J in the menu 916i as a result of matching the skin color/tone of the recommended looks (facial images) with the skin color/tone of the loaded facial image (see Paragraph 0095). The claimed target skin tone is met by the skin tone of Paragraph 0095. Moreover, the recommended looks are a part of all looks (first plurality of content items) stored in the memory/storage according to Paragraph 0039. In yet another non-limiting example, the claimed skin tone can be mapped to the skin tone of the composite image 908 directly loaded from the storage of the custom looks (all of the stored looks meet the claimed first plurality of content items) in the virtual makeup platform. By selecting a particular loaded image (a custom look that had been previously stored by the user to the virtual makeup platform) having a particular skin color/tone from storage in the virtual makeup platform, the particular custom look can be loaded into the user interface as the composite facial image 908, as shown in each of FIGS. 9D-9H, via user interaction. By selecting the loaded image (a custom look) with a particular skin color/tone from the storage of the virtual makeup platform, the user has also selected the skin color/tone associated with the loaded image (the loaded custom look). Such skin color/tone of the loaded image also meets the claimed target skin tone/color because it is used to query for the recommended looks and the associated virtual makeup products (see Paragraphs 0095-0096). The second plurality of content items (recommended looks) are produced in FIGS. 9I-9J in the menu 916i as a result of matching the primary skin color/tone of the recommended looks with the target skin color/tone of the loaded image (see Paragraph 0095). The claimed target skin tone is also described in Paragraph 0095. Additionally, the skin tone can be rightfully mapped to its meaning in applicant's specification at Paragraph 0041 and FIG. 5. 
As such, the claimed dominant skin tone is also met by the previously user-selected eyeshadow/lip/cheek color/tone of the virtual makeup products in each of the user interfaces of FIGS. 9D-9H, applied to the composite facial image 908 so as to change the skin tone of the composite facial image 908, then stored as custom looks in the virtual makeup platform, and then loaded into the user interface of FIGS. 9A-9J as a loaded image, so that the second plurality of content items (recommended looks of FIGS. 9I-9J) can be displayed with the skin tone of the recommended looks matching the skin tone of the loaded composite image 908. Odizzio teaches at Paragraph 0044 that "particular combinations of multiple virtual makeup products (e.g., a virtual eyeshadow and a virtual blush) may be stored as 'looks' 250" (stored looks meeting the claimed first plurality of content items) and at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products with primary color according to Paragraph 0096) and can submit their custom looks to virtual makeup platform 120 (for storage)". Custom looks meet the claimed first plurality of content items. Accordingly, each custom look stored in the virtual makeup platform 120 has a primary/dominant skin color/tone. Odizzio teaches at Paragraph 0086 that the user interface enables a user to upload an image from a local storage at a client device 102 or a remote server of the virtual makeup platform 120. The application of multiple virtual makeup products with a primary color as applied to the facial images has been shown in each of the user interfaces of FIGS. 9D-9H in detail, in response to the user selection of a range of colors relating to the virtual makeup products. 
As such, each custom look can be customized with a selected primary skin/eyeshadow/lip/cheek color/tone and can be stored in the virtual makeup platform 120, and one of the custom looks applied with the custom skin tone can be loaded as a composite facial image 908 from the local storage of the client device. The particular custom look has been applied with the one or more virtual eyeshadow/lipstick/foundation products with a primary eyeshadow/lip/cheek color/tone. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Each one of the recommended looks displayed in the menu 916i of FIG. 9I has a dominant skin color. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. Sartori Odizzio teaches: obtaining a query for a plurality of content items (Odizzio teaches obtaining a query for a plurality of virtual makeup looks (e.g., looks 250). Odizzio teaches loading the composite facial image with target makeup having a target skin tone to query for the recommended looks (content items) applied with one or more virtual makeup product(s) having the primary skin tone/color. Odizzio teaches at FIGS. 9A-9J and Paragraphs 0094-0097 that the user interface obtains a query for a plurality of virtual makeup looks. 
Odizzio teaches at Paragraph 0044 that particular combinations of multiple virtual makeup products may be stored as looks 250, and at Paragraph 0094 that FIG. 9J shows a screen capture 900j of the example graphical user interface showing selectable "Looks" via a graphical menu 916i in response to a user input selecting the "Looks" category via menu 906. It is noted that each look is a facial image having characteristics such as a skin tone, as disclosed at Paragraph 0095: "platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face of the loaded image". The loaded image is the image 908 of FIG. 9J); causing, via a first user interface, a plurality of target skin tones to be presented on a client device associated with a user, the plurality of target skin tones defined by a plurality of predefined color value thresholds (Odizzio's ranges of colors correspond to the claimed predefined color value thresholds. A range of colors means [c1, c2], where c1 represents a lower threshold and c2 represents an upper threshold of color values. With respect to the claim limitation newly amended after the Board Decision rendered 9/17/2025, Odizzio teaches at Paragraphs 0088-0089 that, as shown in screen 900d of FIG. 9D, menu 918d includes an option to select from a range of colors, i.e., shades, in a particular line of foundation products. As shown at screen capture 900g in FIG. 9G, a user has selected "lips" via menu 906 and "lipstick" via menu 906g. In response, the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line and a graphical menu 918g including an option to select from a range of colors for a particular line of lipstick products. Odizzio teaches at Paragraph 0094 that a look is a particular combination of two or more makeup products and that end users can configure their own custom looks by selecting various combinations of makeup products. 
Since each makeup product is associated with one of the target skin tones as disclosed at FIGS. 9A-9J, the custom looks are associated with the target skin tones. The skin tone relating to the composite facial image 908 may be selected via user interaction in any one of the user interfaces of FIGS. 9A-9J. For example, Odizzio teaches at Paragraph 0091 that, in response to receiving a makeup product selection and a request to apply the selected product, a composite image 908 including a base image and a makeup image based on the selected product may be displayed to the user via the graphical user interface (Odizzio teaches at Paragraphs 0088-0089 selecting a color of the virtual lipstick products, the virtual eyeshadow products or the virtual foundation products so that the selected virtual makeup products can be applied via the user interface to change the skin tone of the composite image 908). Odizzio teaches at Paragraph 0093 changing a skin tone of the composite image 908 via user interaction in the user interface of FIG. 9A (which also applies to the user interface of FIG. 9I or 9J for generating the additional recommended looks). Odizzio teaches that if the user does not like the selected makeup product as applied (resulting in a first skin tone of the facial image 908) or wishes to try a different makeup product (with a second skin tone), the user can elect to remove the applied makeup product by selecting remover option 950. Odizzio's remover option 950 may act as a reset button, for example, removing all or some of the makeup images forming the composite image 908 in response to a user selection, and the user can use the interactive tool to finely remove specific areas of virtually applied makeup product (thus changing the skin tone of the composite image 908). The user interaction tools such as the filter tool 940 and remover tool 950 of FIG. 9A are also shown in FIGS. 9D-9H. 
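The examiner's reading of a "range of colors" as predefined color value thresholds [c1, c2] can be illustrated with a minimal sketch. The threshold values, the tone names, and the RGB-to-lightness mapping below are hypothetical illustrations, not taken from Odizzio, the claims, or the specification:

```python
# Hypothetical sketch of "target skin tones defined by predefined
# color value thresholds": each named tone is a range [c1, c2] of
# color values, and a color matches the tone whose range contains it.
# All threshold values here are illustrative only.
SKIN_TONE_THRESHOLDS = {
    "dark":   (0, 85),     # [c1, c2]: lower/upper color value bounds
    "medium": (86, 170),
    "light":  (171, 255),
}

def lightness(rgb):
    """Approximate lightness of an (R, G, B) color as the mean channel value."""
    return sum(rgb) // 3

def classify_skin_tone(rgb):
    """Return the predefined tone whose [c1, c2] range contains the color, else None."""
    value = lightness(rgb)
    for tone, (c1, c2) in SKIN_TONE_THRESHOLDS.items():
        if c1 <= value <= c2:
            return tone
    return None

print(classify_skin_tone((60, 40, 30)))     # dark: mean channel value 43
print(classify_skin_tone((210, 180, 160)))  # light: mean channel value 183
```

Under this reading, presenting a palette of selectable shades amounts to presenting the set of [c1, c2] ranges; selecting a shade selects one range.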
Accordingly, the skin tone of the composite image 908 (a look) can be changed via user interaction by changing one of the eyeshadow color, lip color and facial (cheek) foundation color. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual eyeshadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230 and base image processing 232); virtual combinations of multiple virtual makeup products have been performed in the user interfaces of FIGS. 9D-9H; and Odizzio teaches at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)". Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b; alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image; and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products. Accordingly, Odizzio makes clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced custom look (a composite image) can be loaded as the loaded image disclosed in Paragraph 0095 into any of the user interfaces of FIGS. 9D-9G as a composite image 908. 
Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 9A-9L and Paragraphs 0089-0099 obtaining, via an interaction with at least one of the plurality of target skin tones presented as shown in FIGS. 9D-9H, in response to the user selection of the foundation color in the menu 918d of FIG. 9D, the user selection of the eyeshadow color in the menu 918e of FIG. 9E, the user selection of the eye shadow color in the menu 918f of FIG. 9F, or the user selection of the lip balm color in the menu 918g. The selected lip color and/or foundation color and/or eyeshadow color of any of FIGS. 9D-9F allows the relevant beauty products to be applied to the user face image 908 to show the target skin tone: via the user's interaction with the colors of palette 918d for the associated beauty product 916d of FIG. 9D to be applied to the user face image 908, or the colors of palette 918e for the associated beauty product in the menu 916e of FIG. 9E to be applied to the user face image 908 to provide the target skin tone, or the colors of palette 918f of FIG. 
9F for the associated beauty product in the menu 916f to be applied to the user face image 908, or the colors of palette 918g for the associated beauty product in 916g to be applied to the user face image 908); processing, using a trained machine learning model, a first plurality of content items stored and maintained in a data store to determine a respective dominant skin tone of a respective face represented in a respective content item of the first plurality of content items, wherein determining the respective dominant skin tones includes: determining a region in the respective content item corresponding to the respective face represented in the respective content item; and determining color values for the region corresponding to the respective face represented in the respective content item, the color values corresponding to the respective dominant skin tone of the face represented in the respective content item (Odizzio teaches at Paragraph 0094 that a look is associated with one or more makeup products applied, and at Paragraph 0095 that the characteristics of the recommended look include certain key features (eyes, nose, mouth), hair color and texture, and skin tone. Matching the characteristics of the recommended look to the characteristics of the face depicted in the loaded facial image means the skin tone (eyes/nose/mouth/hair) of the recommended look matches the face depicted in the loaded image, and the recommended look includes one or more virtual makeup products applied. Accordingly, determining a recommended look includes determining the eye/nose/mouth region of the recommended look matching the face depicted in the loaded face image, and determining the skin tone and primary color values of the eye/nose/mouth region with the applied foundation makeup product. Moreover, determining the recommended look by selecting the particular look in FIG. 
9J and Paragraph 0097 (with the applied eyeshadow product further selected) includes determining a region (e.g., an eye region) of the recommended look, where the eyeshadow product has been determined for the eye region of the recommended look, and the selected eyeshadow product in FIG. 9J determines a range of colors (primary or secondary color) applied to the eye region. Moreover, determining the recommended look by selecting the recommended look with the applied foundation product (selected in FIG. 9J) includes determining a region (e.g., an entire facial region) of the recommended look, where the foundation product has been determined for the facial region of the recommended look, and the selected foundation product determines a range of colors (primary or secondary color) applied to the entire facial region (see FIGS. 9J and 9L). Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., an eyeshadow region) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., the entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that, in response to a user selection of a particular look, a menu 918j is displayed showing the user-selectable makeup products that are included in the selected look. 
Using menu 918j (which shows a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look, showing a dominant color of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look, shown in FIG. 9D, are the primary color of the particular look. Odizzio's facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. 
Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks, both in general and as applied to their specific facial features (such as a lip region, eyelid region, cheek region, hair region, mouth region or nose region as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, a cheek region to apply a foundation skin tone, or a region to apply bronzer to define or sculpt certain facial features per Paragraph 0096) corresponding to the respective face in the recommended look, and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio's recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. 
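The two-step determination recited in the claim (find the region corresponding to the face, then derive the color values for that region as the dominant skin tone) can be sketched as follows. The given region coordinates and the mean-color heuristic are illustrative assumptions only; neither the application's trained model nor Odizzio's feature-detection process is reproduced here:

```python
# Illustrative sketch of the claimed dominant-skin-tone steps:
# (1) determine a region corresponding to the face, then
# (2) determine color values for that region as the dominant tone.
# A real system would use a trained face detector; here the region
# is supplied and the "dominant" color is a simple per-channel mean.

def dominant_skin_tone(image, region):
    """image: 2-D list of (R, G, B) pixels; region: (top, left, bottom, right)."""
    top, left, bottom, right = region
    pixels = [image[y][x] for y in range(top, bottom) for x in range(left, right)]
    n = len(pixels)
    # Per-channel mean color of the face region stands in for the dominant skin tone.
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# Tiny 2x2 "image" whose entire extent is treated as the face region.
image = [[(200, 160, 140), (210, 170, 150)],
         [(190, 150, 130), (200, 160, 140)]]
print(dominant_skin_tone(image, (0, 0, 2, 2)))  # (200, 160, 140)
```

The resulting color values could then be compared against the same [c1, c2] ranges used for the target skin tones to filter the stored content items.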
The claimed first plurality of content items can be specifically mapped to Odizzio's virtual makeup looks stored in the repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J, produced by applying one or more virtual makeup products according to one of the user interfaces of FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio's recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 with the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. 
Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230 and the base image processing 232), and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H, and at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)". Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors). 
Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products (user interface selections of various makeup products have been shown in FIGS. 9D-9G, Paragraphs 0088-0090), and the end users can submit their custom looks to virtual makeup platform 120. Accordingly, the custom looks can be displayed in the menu 916i, wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250. Odizzio teaches at Paragraph 0088 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and menu 918d includes an option to select from a range of colors, shades, in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture. 
Odizzio teaches at Paragraph 0089 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and the menu 918d includes an option to select from a range of colors, shades, in a particular line of foundation products. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS. 9I-9J and Paragraphs 0094-0097 that the selectable looks are shown in the graphical menu 916i, and looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products, and analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908); Determining, based at least in part on the first target skin tone, a second plurality of content items from the first plurality of content items, where the respective dominant skin tone associated with the respective faces represented in the second plurality of content items included the first target skin tone ( The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio's recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant skin color. 
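The claimed filtering step (selecting a second plurality of content items from the first plurality based on a dominant-vs-target skin tone match) can be sketched as follows. This is an illustrative sketch only, not code from the application or the Odizzio reference; all names (`ContentItem`, `dominant_tone`, `filter_by_target_tone`) are hypothetical.

```python
# Hypothetical model of the claimed step: keep only content items whose
# dominant skin tone matches the selected target skin tone.
from dataclasses import dataclass


@dataclass
class ContentItem:
    item_id: str
    dominant_tone: str  # e.g., a skin-tone class label such as "502"


def filter_by_target_tone(items, target_tone):
    """Return the subset of items whose dominant skin tone equals target_tone."""
    return [item for item in items if item.dominant_tone == target_tone]


# Example: three stored "looks", two of which share the target tone class.
looks = [
    ContentItem("look-1", "502"),
    ContentItem("look-2", "504"),
    ContentItem("look-3", "502"),
]
matches = filter_by_target_tone(looks, "502")  # look-1 and look-3
```

In the examiner's mapping, `looks` corresponds to the looks stored in repositories/databases 124 and `matches` to the recommended looks surfaced in menu 916i.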
Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. The claimed first plurality of content items can be specifically mapped to Odizzio's virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J, produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio's recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 with the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches identifying at least one recommended look or user custom look from the plurality of recommended looks or custom looks in the library in FIGS. 9I-9J with its skin tone matching that of the loaded facial image in the area 908. Odizzio teaches at FIG. 
2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230 and the base image processing 232), and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H, and at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)". Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products. Accordingly, Odizzio made it clear that the different virtual makeup looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced virtual makeup look (a composite facial image with virtual makeup products) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. 
Odizzio teaches at Paragraph 0095 determining certain characteristics (dominant skin tone/shade/color) of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (second plurality of content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone, wherein identification of the at least one additional look includes); Causing, via a second user interface, the second plurality of content items to be presented on the client device as responsive to the query ( Odizzio teaches loading the composite facial image with target makeup having a target skin tone to query for the recommended looks (content items) applied with one or more virtual makeup product(s) having the primary skin tone/color. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio's recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant skin color. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. 
The look recommendations may be automatically generated using machine-learning based models. Odizzio teaches at FIG. 9I and Paragraph 0095 that the looks displayed via menu 916i may include user-specific recommendations for combinations of makeup products, and the platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics (target skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 9I-9J and Paragraphs 0094-0097 causing the recommended looks (meeting the second plurality of content items) to be presented on the client device 102 in response to the user interaction query via the selection menus). The examiner maps the skin colors on a user interface, such as the face colors 102-109 in the menu 918d of FIG. 9D, the eyeshadow colors 01-05 in the menu 918e of FIG. 9E, the lip colors 01-06 in the menu 918g, or the lip colors in the area 1106 of FIGS. 11A-11B, to the claimed target skin tones to be presented on a client device. The color selected in the menu 918d of FIG. 9E, eyeshadow color 03, is mapped to the first target skin tone for the eyeshadow skin. Similarly, the face color 103 can also be selected in menu 918d of FIG. 9D for the face foundation skin to filter a plurality of virtual makeup products. In FIG. 9G, the lip color 03 is selected to be a first target skin tone for the lip skin. Moreover, the skin color of each of the virtual makeup products (looks) corresponds to the skin color of each look disclosed in FIGS. 9A-9I because each look is constructed based on each virtual makeup product. Sartori Odizzio teaches at FIG. 9I, in association with the first selected target skin color/tone in FIGS. 9D-9G, the looks (the facial images with the applied virtual makeup products): by selecting "Looks" in the menu 906 of FIG. 9D, the first plurality of images (looks) tied to the applied first plurality of virtual makeup images can be shown in FIG. 9I. 
Moreover, the second plurality of virtual makeup foundation products can be generated by selecting a facial foundation color or tone in 918d; further, by selecting "Looks" in the menu 906 of FIG. 9D after selecting the facial foundation color or tone in 918d, a second plurality of images can be generated in the menu 916i of FIG. 9I. By selecting "Looks" in the menu 906 of FIG. 9E, the first plurality of images (looks) tied to the applied first plurality of virtual makeup images can be shown in FIG. 9I. By selecting "Looks" in the menu 906 of FIG. 9F, the second plurality of images (looks) tied to the applied second plurality of virtual makeup images can be shown in the menu 916i of FIG. 9I. Similarly, by selecting "Looks" in the menu 906 of FIG. 9G, the first plurality of images (looks) tied to the applied first plurality of virtual lipstick makeup images can be shown in the menu 916i of FIG. 9I. By selecting "Looks" in the menu 906 of FIG. 9H, the second plurality of images (looks) tied to the applied second plurality of virtual lipstick makeup images can be shown in the menu 916i of FIG. 9I. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 2, 4-7, 10-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sartori Odizzio et al. US-PGPUB No. 2018/0075524 (hereinafter Sartori Odizzio) in view of Barron et al. US-PGPUB No. 
2021/0390311 (hereinafter Barron, based on the provisional application 62/705,076's filing date). Re Claim 1: Sartori Odizzio teaches: Obtaining a query for a plurality of content items ( Odizzio teaches obtaining a query for a plurality of virtual makeup looks (e.g., looks 250). Odizzio teaches loading the composite facial image with target makeup having a target skin tone to query for the recommended looks (content items) applied with one or more virtual makeup product(s) having the primary skin tone/color. Odizzio teaches at FIGS. 9A-9J and Paragraphs 0094-0097 that the user interface obtains a query for a plurality of virtual makeup looks. Odizzio teaches at Paragraph 0044 that particular combinations of multiple virtual makeup products may be stored as looks 250, and at Paragraph 0094 that FIG. 9J shows a screen capture 900j of the example graphical user interface showing selectable "Looks" via a graphical menu 916i in response to a user input selecting the "Looks" category via menu 906. It is noted that each Look is a facial image having a characteristic such as a skin tone, as disclosed at Paragraph 0095: "platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face of the loaded image". The loaded image is the image 908 of FIG. 9J); Causing, via a first user interface, a plurality of target skin tones to be presented on a client device associated with a user, the plurality of target skin tones defined by a plurality of predefined color value thresholds ( Odizzio's range of colors corresponds to the claimed predefined color value thresholds. A range of colors means [r1, r2], where r1 represents a lower threshold and r2 represents an upper threshold of color values. With respect to the new claim limitation as highlighted, amended after the Board Decision rendered 9/17/2025, Odizzio teaches at Paragraphs 0088-0089 that, as shown in screen 900d of FIG. 
9D, menu 918d includes an option to select from a range of colors, shades, in a particular line of foundation products. As shown at screen capture 900g in FIG. 9G, a user has selected "lips" via menu 906 and "lipstick" via menu 906g. In response, the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line and a graphical menu 918g including an option to select from a range of colors for a particular line of lipstick products. Odizzio teaches at Paragraph 0094 that a look is a particular combination of two or more makeup products and end users can configure their own custom looks by selecting various combinations of makeup products. Since each makeup product is associated with one of the target skin tones as disclosed at FIGS. 9A-9J, the custom looks are associated with the target skin tones. The skin tone relating to the composite facial image 908 may be selected via the user interaction in any one of the user interfaces of FIGS. 9A-9J. For example, Odizzio teaches at Paragraph 0091 that, in response to receiving a makeup product selection and a request to apply the selected product, a composite image 908 including a base image and a makeup image based on the selected product may be displayed to the user via the graphical user interface (Odizzio teaches at Paragraphs 0088-0089 selecting a color of the virtual lipstick products or the virtual eyeshadow products or the virtual foundation products so that the selected virtual makeup products can be applied via the user interface to change the skin tone of the composite image 908). Odizzio teaches at Paragraph 0093 changing a skin tone of the composite image 908 via the user interaction in the user interface of FIG. 9A (which also applies to the user interface of FIG. 9I or 9J for generating the additional recommended looks). 
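The examiner's reading of a "range of colors" as a pair of predefined thresholds [r1, r2] can be sketched as follows. This is an illustrative aside only, not part of the record; the function name and the numeric values are hypothetical.

```python
# Hypothetical sketch of a target skin tone defined by predefined lower
# and upper color value thresholds [r1, r2], per the examiner's mapping.
def within_tone_range(color_value, r1, r2):
    """Return True if a scalar color value falls within the closed range [r1, r2]."""
    return r1 <= color_value <= r2


# Example: a tone class covering color values 120-150 (arbitrary units).
in_range = within_tone_range(135, 120, 150)       # inside the thresholds
out_of_range = within_tone_range(200, 120, 150)   # above the upper threshold
```

Under this reading, each selectable tone in a menu such as 918d corresponds to one such [r1, r2] interval of color values.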
Odizzio teaches that if the user does not like the selected makeup product as applied (resulting in a first skin tone of the facial image 908) or wishes to try a different makeup product (with a second skin tone), the user can elect to remove the applied makeup product by selecting remover option 950. Odizzio's remover option 950 may act as a reset button, for example, removing all or some of the makeup images forming the composite image 908 in response to a user selection, and the user can use the interactive tool to finely remove specific areas of virtually applied makeup product (thus changing the skin tone of the composite image 908). The user interaction tool, such as the filter tool 940 or remover tool 950 of FIG. 9A, is also shown in FIGS. 9D-9H. Accordingly, the skin tone of the composite image 908 (a look) can be changed via the user interaction by changing one of the eyeshadow color, lip color and facial (cheek) foundation color. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230 and the base image processing 232), and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H, and at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)". 
Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products. Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced custom look (a composite image) can be loaded as a loaded image, as disclosed in Paragraph 0095, into any of the user interfaces of FIGS. 9D-9G as a composite image 908. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 9A-9L and Paragraphs 0089-0099 obtaining, via an interaction with at least one of the plurality of target skin tones presented as shown in FIGS. 
9D-9H, in response to the user selection of the foundation color in the menu 918d of FIG. 9D, or the user selection of the eyeshadow color in the menu 918d of FIG. 9E, or the user selection of the eye shadow color in the menu 918f of FIG. 9F, or the user selection of the lip balm color in the menu 918g. The selected lip color and/or foundation color and/or eyeshadow color of any of FIGS. 9D-9F allows the relevant beauty products to be applied to the user face image 908 to show the target skin tone via the user's interaction with the colors of palette 918d for the associated beauty product 916d of FIG. 9D to be applied to the user face image 908, or the colors of palette 918e for the associated beauty product in the menu 916e of FIG. 9E to be applied to the user face image 908 to provide the target skin tone, or the colors of palette 918f of FIG. 9F for the associated beauty product in the menu 916f to be applied to the user face image 908, or the colors of palette 918g for the associated beauty product in 916g to be applied to the user face image 908); Processing, using a trained machine learning model, a first plurality of content items stored and maintained in a data store to determine a respective dominant skin tone of a respective face represented in a respective content item of the first plurality of content items, wherein determining the respective dominant skin tones includes: Determining a region in the respective content item corresponding to the respective face represented in the respective content item; and Determining color values for the region corresponding to the respective face represented in the respective content item, the color values corresponding to the respective dominant skin tone of the face represented in the respective content item ( Odizzio teaches at Paragraph 0094 that a look is associated with one or more makeup products applied and at Paragraph 0095 that the characteristics of the recommended look include certain key features (eyes, nose, 
mouth), hair color and texture, and skin tone. Matching the characteristics of the recommended look to the characteristics of the face depicted in the loaded facial image means the skin tone (eyes/nose/mouth/hair) of the recommended look matches the face depicted in the loaded image, and the recommended look includes one or more virtual makeup products applied. Accordingly, determining a recommended look includes determining the eye/nose/mouth region of the recommended look matching the face depicted in the loaded face image and determining the skin tone and primary color values of the eye/nose/mouth region with the applied foundation makeup product. Moreover, determining the recommended look by selecting the particular look in FIG. 9J and Paragraph 0097 (with the applied eyeshadow product further selected) includes determining a region (e.g., an eye region) of the recommended look where the eyeshadow product has been determined for the eye region of the recommended look, and the selected eyeshadow product in FIG. 9J determines a range of colors (primary or secondary color) applied to the eye region. Moreover, determining the recommended look by selecting the recommended look with the applied foundation product (selected in FIG. 9J) includes determining a region (e.g., an entire facial region) of the recommended look where the foundation product has been determined for the facial region of the recommended look, and the selected foundation product determines a range of colors (primary or secondary color) applied to the entire facial region (see FIGS. 9J and 9L). Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., the eyeshadow region) in the particular look of FIG. 
9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., the entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user-selectable makeup products that are included in the selected look. Using menu 918j (which shows a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look showing a dominant color of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look, shown in FIG. 9D, represent the primary color of the particular look. Odizzio's facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. 
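The claimed determination of color values for a face region, yielding a dominant skin tone, can be sketched as follows. This is an illustrative sketch only, not the applicant's or the reference's algorithm; it models one common approach (modal quantized color over the region's pixels), and all names are hypothetical.

```python
# Hypothetical sketch: given the pixels of a detected face region, quantize
# each RGB value into coarse buckets and take the most frequent bucket as
# the region's dominant color (a proxy for the dominant skin tone).
from collections import Counter


def dominant_color(pixels, bucket=32):
    """pixels: iterable of (r, g, b) tuples; returns the modal quantized color."""
    quantized = [
        (r // bucket * bucket, g // bucket * bucket, b // bucket * bucket)
        for r, g, b in pixels
    ]
    return Counter(quantized).most_common(1)[0][0]


# Example region: two similar skin-tone pixels and one dark outlier.
region = [(201, 160, 130), (205, 162, 128), (60, 60, 60)]
tone = dominant_color(region)  # the two skin-tone pixels share a bucket
```

The resulting `tone` could then be compared against target skin tone classes such as those the examiner maps to the color menus of FIGS. 9D-9G.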
Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as the lip region, eyelid region, cheek region, hair region, mouth region, or nose region as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, a cheek region to apply a foundation skin tone) (e.g., applying bronzer to sculpt certain facial features, Paragraph 0096) corresponding to the respective face in the recommended look and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 
9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio's recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. The claimed first plurality of content items can be specifically mapped to Odizzio's virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J, produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio's recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 with the skin tone of the virtual makeup looks in the storage. 
Odizzio clearly shows at Paragraph 0095 that the recommended looks (second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230 and the base image processing 232), and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 
9D-9H, and at Paragraph 0094 that “end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)”. Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, that alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and that a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors). Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products (user interface selections of various makeup products are shown in FIGS. 9D-9G, Paragraph 0088-0090) and that the end users can submit their custom looks to virtual makeup platform 120. Accordingly, the custom looks can be displayed in the menu 916i, wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250. 
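The filtering discussed above, as the Office Action maps it to Odizzio, can be sketched in code: a second plurality of looks is selected from a first plurality by matching each look's dominant skin tone against the target skin tone of the loaded facial image. This is an illustrative sketch only; the `Look` class, its field names, and the tone labels are assumptions of the sketch, not structures disclosed in either reference.

```python
# Illustrative sketch (not from either reference) of the mapping above:
# a "second plurality" of looks is filtered from a "first plurality" by
# matching each look's dominant skin tone to the target skin tone detected
# in the loaded facial image. The Look class, field names, and tone labels
# are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class Look:
    look_id: str
    dominant_skin_tone: str  # coarse tone class, e.g. "light" / "medium" / "dark"

def filter_looks_by_target_tone(looks: list[Look], target_tone: str) -> list[Look]:
    """Keep only the looks whose dominant skin tone matches the target skin tone."""
    return [look for look in looks if look.dominant_skin_tone == target_tone]

# First plurality: all stored looks; second plurality: the matching subset.
library = [
    Look("look-1", "light"),
    Look("look-2", "medium"),
    Look("look-3", "medium"),
]
recommended = filter_looks_by_target_tone(library, "medium")
```

On this reading, the "determining" step of the claim reduces to an equality (or range-membership) test between per-look dominant tones and the detected target tone.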
Odizzio teaches at Paragraph 0088 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and that menu 918d includes an option to select from a range of colors (shades) in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS. 
9I-9J and Paragraph 0094-0097 that the selectable looks are shown in the graphical menu 916i, that looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products, and that analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908); Determining, based at least in part on the first target skin tone, a second plurality of content items from the first plurality of content items, where the respective dominant skin tone associated with the respective faces represented in the second plurality of content items included the first target skin tone ( The recommended looks (FIGS. 9I-9J and Paragraph 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant skin color. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color) and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. The claimed first plurality of content items can be specifically mapped to Odizzio’s virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with one or more virtual makeup products) of FIGS. 
9I-9J produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio’s recommended looks of Paragraph 0095 selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 with the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (the second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches identifying at least one recommended look or user custom look from the plurality of recommended looks or custom looks in the library in FIGS. 9I-9J with its skin tone matching that of the loaded facial image in the area 908. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, the image compositing 230 and the base image processing 232), that combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H, and at Paragraph 0094 that “end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)”. 
Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, that alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and that a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products. Accordingly, Odizzio made it clear that the different virtual makeup looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced virtual makeup look (a composite facial image with virtual makeup products) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches that a target characteristic such as a skin tone of the facial image can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics (dominant skin tone/shade/color) of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (the second plurality of content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 
9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone, wherein identification of the at least one additional look includes); Causing, via a second user interface, the second plurality of content items to be presented on the client device as responsive to the query ( Odizzio teaches loading the composite facial image with target makeup having a target skin tone to query for the recommended looks (content items) applied with one or more virtual makeup products having the primary skin tone/color. The recommended looks (FIGS. 9I-9J and Paragraph 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant skin color. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color) and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. Odizzio teaches at FIG. 9I and Paragraph 0095 that the looks displayed via menu 916i may include user-specific recommendations for combinations of makeup products and that the platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics (target skin tone) of the face in the loaded image. Odizzio teaches at FIGS. 
9I-9J and Paragraph 0094-0097 causing the recommended looks (mapped to the second plurality of content items) to be presented on the client device 102 in response to the user interaction query via the selection menus). For the above reasons, Odizzio implicitly teaches at Paragraph 0088-0089 the claimed use of a trained machine learning model. Barron-provisional explicitly teaches the claimed use of a trained machine learning model. For example, Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228....AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224. AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to apply the color to the live image 134 of the user 238. AR effects 219 are determined based on skin tone, where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238, and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body...may use other information to determine which body part 222 is having the beauty product 224 applied. The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. 
Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 includes color 316 ...stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraph 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236...presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B (first plurality of content items) ....the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224....retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG. 4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B (second plurality of content items) ... the identify product code module 3408 of the determine beauty product module 126 of FIG. 34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224. 
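The role the rejection assigns to the "trained machine learning model" (classifying a face region's color into a skin tone class) can be illustrated with a trivial stand-in. The nearest-centroid rule, the centroid values, and the class names below are assumptions of this sketch and are not the model disclosed in Barron-provisional or Odizzio.

```python
# Trivial stand-in (an assumption of this sketch, not Barron-provisional's model)
# for a trained classifier that maps an average face-region color to a skin tone
# class: a nearest-centroid rule over RGB triples. The centroid values and class
# names are illustrative only.
TONE_CENTROIDS = {
    "light": (235, 200, 180),
    "medium": (180, 135, 105),
    "dark": (90, 60, 45),
}

def classify_skin_tone(avg_rgb: tuple) -> str:
    """Return the tone class whose centroid is closest in squared Euclidean distance."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(avg_rgb, centroid))
    return min(TONE_CENTROIDS, key=lambda tone: sq_dist(TONE_CENTROIDS[tone]))
```

A production system would presumably use a trained neural network rather than fixed centroids; the point is only that the output is a discrete tone class usable for matching looks to the facial image.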
It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have applied a trained machine learning model, such as a trained neural network, for identifying a skin tone of a virtual makeup product being applied to the person image, as taught in Barron-provisional, to modify the analysis module or the machine learning model of Odizzio at Paragraph 0095-0096 so that the machine learning model analyzes/identifies the skin tone/color characteristics of each virtual makeup look as well as the skin tone/color characteristics of the composite facial image 908, so as to provide recommended looks. One of ordinary skill in the art would have been motivated to use a trained machine learning model as an analysis/classification module/tool for identifying the characteristics of the virtual makeup looks so as to obtain the recommended looks with skin tone characteristics matching the skin tone characteristics of the composite facial image 908 applied with one or more virtual makeup products selected by the one or more target skin tones of FIGS. 9A-9J. Therefore, the claimed invention is met by the combination of Odizzio and Barron-provisional. Re Claim 2: Claim 2 encompasses the same scope of invention as claim 1 except for the additional claim limitation that processing the first plurality of content items includes: determining, using the trained machine learning model and for the respective faces represented in the first plurality of content items, a respective dominant skin tone value; determining dominant skin tone for the respective faces represented in the first plurality of content items using the trained machine learning model and based at least in part on the respective dominant skin tone value and the plurality of predefined color value thresholds. 
Odizzio teaches the claim limitation: processing the first plurality of content items includes: determining, using the trained machine learning model and for the respective faces represented in the first plurality of content items, a respective dominant skin tone value; determining dominant skin tone for the respective faces represented in the first plurality of content items using the trained machine learning model and based at least in part on the respective dominant skin tone value and the plurality of predefined color value thresholds ( Odizzio’s range of colors corresponds to the claimed predefined color value thresholds. A range of colors means [r1, r2], where r1 represents a lower threshold and r2 represents an upper threshold of color values. With respect to the new claim limitation as amended after the Board Decision rendered 9/17/2025, Odizzio teaches at Paragraph 0088-0089 that, as shown in screen 900d of FIG. 9D, menu 918d includes an option to select from a range of colors (shades) in a particular line of foundation products. As shown at screen capture 900g in FIG. 9G, a user has selected “lips” via menu 906 and “lipstick” via menu 906g. In response, the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line and a graphical menu 918g including an option to select from a range of colors for a particular line of lipstick products. The claimed target skin tone is interpreted in the same manner as appellant’s specification at FIG. 5 and Paragraph 0041 (namely the color classes 502-504-506-508 of FIG. 5 associated with the content items, wherein the content items may refer to the virtual cosmetic product items or virtual looks (the composite facial images applied with the selected virtual makeup products with the selected colors—skin tones)). The recommended looks (FIGS. 
9I-9J and Paragraph 0095-0096) are selected or filtered based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Such a primary color of the virtual products included in the looks is selected by the user via the user interface of FIGS. 9A-9H when generating each of the looks (the composite facial images). Each of the recommended looks displayed in the menu 916i of FIG. 9I has the dominant/primary skin color, which is based on the primary color of its associated virtual makeup product. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color) and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. Odizzio discloses at each of FIGS. 9D-9G and Paragraph 0088-0089 that menu 918d includes an option to select from a range of colors (skin tones), shades in a particular line of foundation, and menu 918g includes an option to select from a range of colors (lip skin colors or lip skin tones) for a particular line of lipstick products to be applied to the lips (lip skin) in the facial image 908. The range of colors of FIGS. 9D-9G is presented in the same manner as the range of colors 502-508 of FIG. 5 in appellant’s specification. For example, in FIG. 9F, Odizzio teaches a 5-color eye shadow palette with 5 classes of colors ranging from dark color to medium and light color and thus has taught the eyeshadow skin colors. The virtual lipstick product has a primary lip color (primary skin color). 
The virtual lipstick product when applied to the composite facial image 908 is then saved to the database as a look (Paragraph 0039) so that the saved look also has a primary skin color. Similarly, in FIG. 9G, Odizzio teaches a 6-color lipstick palette ranging from dark color to medium color and light color and therefore has taught the lip skin colors. In FIG. 9H, Odizzio teaches a 5-color eye shadow palette with 5 classes of eye shadow skin colors ranging from dark color to medium and light color (associated with eyeshadow skin). The virtual eyeshadow product has a primary eyeshadow skin color. The virtual eyeshadow product when applied to the composite facial image 908 is then saved to the database as a look (Paragraph 0039) so that the saved look also has a primary skin color. Moreover, Odizzio teaches at FIGS. 9D-9G and Paragraph 0088-0089 that, in response, the user is presented with a graphical menu 916d showing at least one virtual foundation (cheek) product or product line….in response, the user is presented with a graphical element 916f showing a selected virtual makeup product that includes a palette of five colors (cheek skin colors)….in response the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line. The virtual foundation product has the primary cheek color. The virtual foundation product when applied to the composite facial image 908 is then saved to the database as a look (Paragraph 0039) so that the saved look also has a primary skin color. Thus, Odizzio clearly shows at FIGS. 9D-9G the claimed feature of determining the respective skin tone in the graphical menu 918d-918g associated with the additional content items as the one or more graphical elements within 916d-916g corresponding to the selected skin tone (the range of colors selected in the graphical menu 918d-918g of FIGS. 9D-9G)). 
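The examiner's reading that a "range of colors" [r1, r2] supplies the claimed predefined color value thresholds can be sketched as a simple binning rule: a dominant skin tone value is assigned to whichever class range contains it. The numeric ranges and class names below are illustrative assumptions, not values from Odizzio or the application.

```python
# Sketch of the [r1, r2] threshold reading: r1 is the lower threshold, r2 the
# upper threshold, and a dominant skin tone value falls into the class whose
# range contains it. The numeric ranges below are illustrative assumptions.
from typing import Optional

COLOR_CLASSES = {
    "dark":   (0, 85),     # [r1, r2] for the dark tone class
    "medium": (86, 170),
    "light":  (171, 255),
}

def bin_dominant_tone(value: int) -> Optional[str]:
    """Return the tone class whose [r1, r2] range contains the given color value."""
    for tone, (r1, r2) in COLOR_CLASSES.items():
        if r1 <= value <= r2:
            return tone
    return None  # value outside every predefined range
```

Under this reading, the boundaries of each selectable color range in menus 918d-918g play the role of the claimed plurality of predefined color value thresholds.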
Barron-provisional teaches the claim limitation: processing the first plurality of content items includes: determining, using the trained machine learning model and for the respective faces represented in the first plurality of content items, a respective dominant skin tone value; determining dominant skin tone for the respective faces represented in the first plurality of content items using the trained machine learning model and based at least in part on the respective dominant skin tone value and the plurality of predefined color value thresholds ( Barron-provisional teaches at Paragraph 0085 that stored images of the beauty product 224 may be used for identifying the beauty product 224…variations 314 include color 316 and variations 314 may include a selection menu 320 that assists a user 238 in selecting variations 314 of the beauty product data 302, and at Paragraph 0124 that current color 1508 indicates a current selection of a variation of the beauty product 224B such as color 316. Barron-provisional teaches at Paragraph 0243 that the identify product code module 3408 may include a neural network trained with deep learning to identify the product code 3610, and at Paragraph 0125 that current color 1508 indicates a current selection of a variation of the beauty product 224B such as color 316 of FIG. 3, wherein FIG. 15 shows classes of color variations of the beauty product 224B with threshold colors. 
Barron-provisional teaches at Paragraph 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236…presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B….the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224….retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG. 4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B…the identify product code module 3408 of the determine beauty product module 126 of FIG. 34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 includes color 316…stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228….AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224. 
AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to apply the color to the live image 134 of the user 238. AR effects 219 are determined based on skin tone, where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238, and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body…may use other information to determine which body part 222 is having the beauty product 224 applied. The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. ). It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Barron-provisional’s machine learning models, which automatically recognize particular features such as the skin tones in the images, into the system and method of Sartori Odizzio so that the images in the image repository are indexed using the skin tones as keywords for image retrieval and the looks (the face images) in the database can be queried directly based on the skin tones. One of ordinary skill in the art would have used the skin tones as keywords for indexing the images. Re Claim 4: Claim 4 encompasses the same scope of invention as claim 1 except for the additional claim limitations of: obtaining, from the client device, a content item having a visual representation of at least a portion of a body part; processing, using the trained machine learning model, the content item to identify the portion of the body part as a region of interest in the content item and to determine a first dominant skin tone associated with the region of interest; and causing, via a third user interface, the first dominant skin tone to be presented on the client device. 
Odizzio further teaches the claim limitation of obtaining, from the client device, a content item having a visual representation of at least a portion of a body part; processing, using the trained machine learning model, the content item to identify the portion of the body part as a region of interest in the content item and to determine a first dominant skin tone associated with the region of interest; and causing, via a third user interface, the first dominant skin tone to be presented on the client device ( Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., an eyeshadow region) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., the entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look. Using menu 918j (showing a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. 
The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look are shown in FIG. 9D. Odizzio’s facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as lip region or eyelid region, or cheek region or hair region, or mouth region, or nose region as key facial features). 
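The region-of-interest analysis described above (detecting a facial region and determining its color values) can be illustrated with a minimal dominant-color sketch: quantize each RGB pixel of the region to a coarse bucket and take the most frequent bucket as the dominant skin tone. This is an illustration only, not Odizzio's algorithm, and the quantization step of 32 is an assumption of the sketch.

```python
# Minimal sketch (an illustration, not Odizzio's algorithm) of determining a
# dominant skin tone for a region of interest such as a lip or cheek region:
# quantize each RGB pixel to a coarse bucket and take the most frequent bucket.
# The quantization step of 32 is an assumption of the sketch.
from collections import Counter

def dominant_color(region_pixels, step: int = 32):
    """Return the most common quantized RGB color among the region's pixels."""
    buckets = Counter(
        tuple((channel // step) * step for channel in px) for px in region_pixels
    )
    return buckets.most_common(1)[0][0]

# E.g., a few pixels sampled from a hypothetical lip region.
lip_region = [(180, 90, 90), (185, 95, 92), (60, 60, 60)]
```

Quantizing before counting makes near-identical shades fall into the same bucket, which is one simple way to obtain a single "dominant" value for a region.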
Odizzio teaches determining a region corresponding to the respective face in the recommended look (e.g., an eye region to apply the eyeshadow tone, a cheek region to apply a foundation tone, or, per Paragraph 0096, applying bronzer to sculpt or define certain facial features) and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio's recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models. The claimed first plurality of content items can be specifically mapped to Odizzio's virtual makeup looks stored in the repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J, produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H.
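The dominant-tone matching attributed to Odizzio above (determining a primary color for a facial region and filtering the stored looks against it) can be illustrated with a minimal sketch. The function names, the channel quantization step, and the distance threshold below are assumptions for illustration only and do not appear in either reference:

```python
from collections import Counter
from math import dist

def dominant_tone(region_pixels, step=32):
    """Return the most common quantized RGB value in a region of interest."""
    counts = Counter(
        tuple((c // step) * step + step // 2 for c in px) for px in region_pixels
    )
    return counts.most_common(1)[0][0]

def filter_looks(looks, target_tone, max_distance=60.0):
    """Keep stored looks whose primary tone is near the target skin tone."""
    return [
        look for look in looks
        if dist(look["primary_tone"], target_tone) <= max_distance
    ]

# A loaded face region (mostly one tone) and two stored looks:
face_region = [(182, 141, 120)] * 90 + [(90, 60, 50)] * 10
tone = dominant_tone(face_region)
looks = [
    {"name": "look-A", "primary_tone": (180, 140, 118)},
    {"name": "look-B", "primary_tone": (60, 40, 30)},
]
matches = filter_looks(looks, tone)  # only look-A is close to the face tone
```

Quantizing before counting makes the mode robust to pixel noise, which is one simple way a "dominant" color could be computed; the references do not specify the actual method.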
The claimed second plurality of content items are then specifically mapped to Odizzio's recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 with the skin tone of the virtual makeup looks in storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (the second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and that the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H, including the selection from a range of skin colors. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches that custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS.
2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230, and base image processing 232), and such combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H; Odizzio teaches at Paragraph 0094 that "end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)". Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, that alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image, and that a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors). Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as the loaded image disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products (user interface selections of various makeup products are shown in FIGS. 9D-9G and Paragraphs 0088-0090) and that the end users can submit their custom looks to virtual makeup platform 120.
Accordingly, the custom looks can be displayed in the menu 916i, wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250. Odizzio teaches at Paragraphs 0088-0089 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and that menu 918d includes an option to select from a range of colors and shades in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS.
9I-9J and Paragraphs 0094-0097 that the selectable looks are shown in the graphical menu 916i, that the looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products, and that analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908). Barron-provisional teaches the claim limitation of obtaining, from the client device, a content item having a visual representation of at least a portion of a body part; processing, using the trained machine learning model, the content item to identify the portion of the body part as a region of interest in the content item and to determine a first dominant skin tone associated with the region of interest; and causing, via a third user interface, the first dominant skin tone to be presented on the client device (Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228… AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224. AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to which to apply the color in the live image 134 of the user 238.
AR effects 219 are determined based on skin tone, where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238; and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body… and may use other information to determine which body part 222 is having the beauty product 224 applied. The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 include color 316… stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraphs 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236… the presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B… the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224… and retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation from the presenter 236 and/or to display the beauty product 224, such as in FIG. 4, where two beauty products 224 are displayed as beauty product 224 and beauty product 224B… the identify product code module 3408 of the determine beauty product module 126 of FIG.
34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Barron-provisional's machine learning models, which automatically recognize particular features such as the skin tones in images so that the images in the image repository are indexed using the skin tones as keywords for image retrieval, into the system and method of Sartori Odizzio, in order to query the looks (the face images) in the database directly based on the skin tones. One of ordinary skill in the art would have used the skin tones as keywords for indexing the images. Re Claim 5: The claim 5 encompasses the same scope of invention as that of the claim 4 except the additional claim limitation that the content item is captured by a camera associated with the client device. However, Sartori Odizzio further teaches the claim limitation that the content item is captured by a camera associated with the client device (Sartori Odizzio teaches at Paragraph 0042 that image capture module 234 may include the software and/or hardware for capturing images at a client device for virtual makeup application… and may include a digital camera, and at Paragraph 0077 that makeup images may be generated and composited with base images captured at different vantage points. Sartori Odizzio teaches at Paragraph 0087 that the user may be presented with screen 900b of FIG.
9B that includes an option 912b to “snap a selfie” and an option 914b to choose a model… a user may be allowed to capture an image via an image capture device associated with the client device 102… the selected image may be from the user's own photo library; at Paragraphs 0090-0091 that, in response to receiving a makeup product selection and a request to apply the selected product, a composite image 908 including a base image of a human face and a makeup image based on the selected product may be displayed to the user; at Paragraphs 0094-0096 that selectable “Looks” are displayed in response to a user input selecting the “Looks” category… end users can configure their own custom looks by selecting various combinations of makeup products and can submit their custom looks to virtual makeup platform 120; at Paragraph 0097 that, in response to a user selection of a particular look via menu 916i, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look; and at Paragraph 0095 that a user may load an image of their face).
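The combination rationale stated above, namely indexing repository images by skin tone "keywords" so that looks can be queried directly by tone, can be sketched minimally. The tone keyword names, their RGB centers, and the squared-distance assignment below are assumptions for illustration only; neither reference discloses a specific index structure:

```python
from collections import defaultdict

# Hypothetical tone keywords; the names and RGB centers are assumptions.
TONE_KEYWORDS = {"fair": (230, 200, 180), "tan": (190, 150, 120), "deep": (110, 70, 50)}

def nearest_keyword(tone):
    """Map a measured RGB skin tone to the closest named tone keyword."""
    return min(
        TONE_KEYWORDS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(TONE_KEYWORDS[name], tone)),
    )

def build_index(repository):
    """Index stored look images by skin-tone keyword for direct querying."""
    index = defaultdict(list)
    for image_id, measured_tone in repository.items():
        index[nearest_keyword(measured_tone)].append(image_id)
    return index

# Index three stored looks, then query by a skin tone detected in a new image:
repo = {"look-1": (228, 198, 178), "look-2": (112, 72, 52), "look-3": (188, 148, 118)}
index = build_index(repo)
results = index[nearest_keyword((190, 152, 119))]  # retrieves the "tan" looks
```

The point of the rationale is captured here: once tones are reduced to shared keywords, retrieval is a direct dictionary lookup rather than a per-image comparison.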
Re Claim 6: The claim 6 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that: receiving, from the client device, a content item having a visual representation of an overall beauty aesthetic, the overall beauty aesthetic including at least one area within the visual representation having an application of at least one beauty product; determining a region of interest in the visual representation, the region of interest corresponding to the at least one area having the application of the at least one beauty product; extracting, from the region of interest, at least one product parameter associated with the at least one beauty product that contributes to the overall beauty aesthetic; identifying at least one additional content item from the second plurality of content items based at least in part on the at least one product parameter; and providing for presentation, on the display of the client device, the at least one additional content item. 
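The flow recited in this limitation, extracting a product parameter from a region of interest and identifying additional content items that share it, can be sketched minimally. The average-color parameter, per-channel tolerance, and all names below are illustrative assumptions, not drawn from the claim or the references:

```python
def extract_parameter(roi_pixels):
    """Average RGB color of the region of interest (the applied-product area)."""
    n = len(roi_pixels)
    return tuple(round(sum(px[i] for px in roi_pixels) / n) for i in range(3))

def identify_additional_items(items, parameter, tolerance=40):
    """Return content items whose product color is within tolerance per channel."""
    return [
        item for item in items
        if all(abs(a - b) <= tolerance for a, b in zip(item["color"], parameter))
    ]

# A lip-region crop, its extracted color parameter, and a small catalog:
roi = [(200, 40, 60), (198, 44, 58), (202, 42, 62)]
param = extract_parameter(roi)
catalog = [
    {"id": "item-1", "color": (205, 45, 65)},
    {"id": "item-2", "color": (30, 30, 30)},
]
matches = identify_additional_items(catalog, param)  # item-1 only
```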
However, Odizzio further teaches this claim limitation, for the reasons set forth in detail in the rejection of claim 1 above (Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region, foundation is applied to the entire face, and eyeshadow is applied to regions of skin surrounding the eye; at FIG. 9J and Paragraph 0097 that, in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look, with the color values of the eyeshadow and foundation products as applied to the particular look shown in FIGS. 9E and 9D; at Paragraphs 0095-0096 that the loaded image may be analyzed by platform 120 to determine certain characteristics of the face depicted in the loaded image, including skin tone, that platform 120 may generate one or more recommended looks specifically tailored to those characteristics, and that product recommendations may be based on subjective or objective rules, including primary vs. secondary color combinations, or may be automatically generated using machine-learning based models; at FIGS. 2A-2B and Paragraphs 0039 and 0044 that particular combinations of multiple virtual makeup products may be stored as looks 250 in the repositories/databases 124, and at Paragraph 0094 that end users can configure their own custom looks and submit them to virtual makeup platform 120; at Paragraphs 0088-0090 that product selection menus 906 enable a user to filter available virtual makeup products according to various filters such as product type, color, shade, finish, coverage, and texture; and at FIGS. 9I-9J and Paragraphs 0094-0097 that the selectable looks shown in the graphical menu 916i may include user-specific recommendations wherein the skin tone of the displayed looks matches the skin tone of the composite facial image in the area 908). In this mapping, the color values of the virtual makeup products as applied to the regions of interest (e.g., the eyeshadow region of FIG. 9E and the foundation region of FIG. 9D) are the claimed at least one product parameter, and the recommended looks of Paragraph 0095 are the claimed at least one additional content item identified from the second plurality of content items. Re Claim 7: The claim 7 is in parallel with the claim 1 in an apparatus form. The claim 7 is subject to the same rationale of rejection as the claim 1. Moreover, Sartori Odizzio teaches a computing system, comprising: one or more processors; and a memory storing program instructions that, when executed by the one or more processors, cause the one or more processors to at least [perform the method steps of the claim 1] (Sartori Odizzio teaches at Paragraph 0029 that the one or more computing devices may include one or more memories that store instructions for implementing the various components described herein and one or more hardware processors configured to execute the instructions stored in the one or more memories): obtain a first plurality of images, each of the first plurality of images including a visual representation of at least a portion of a body part (Sartori Odizzio teaches at FIG.
9I, in association with the first selected target skin color/tone of FIGS. 9D-9G, the looks (the facial images with the applied virtual makeup products): by selecting “Looks” in the menu 906 of FIG. 9D, the first plurality of images (looks) tied to the applied first plurality of virtual makeup images can be shown in FIG. 9I. Moreover, the second plurality of virtual makeup foundation products can be generated by selecting a facial foundation color or tone in menu 918d, and, by further selecting “Looks” in the menu 906 of FIG. 9D after selecting the facial foundation color or tone in 918d, a second plurality of images can be generated in the menu 916i of FIG. 9I. By selecting “Looks” in the menu 906 of FIG. 9E, the first plurality of images (looks) tied to the applied first plurality of virtual makeup images can be shown in FIG. 9I. By selecting “Looks” in the menu 906 of FIG. 9F, the second plurality of images (looks) tied to the applied second plurality of virtual makeup images can be shown in the menu 916i of FIG. 9I. Similarly, by selecting “Looks” in the menu 906 of FIG. 9G, the first plurality of images (looks) tied to the applied first plurality of virtual lipstick makeup images can be shown in the menu 916i of FIG. 9I, and by selecting “Looks” in the menu 906 of FIG. 9H, the second plurality of images (looks) tied to the applied second plurality of virtual lipstick makeup images can be shown in the menu 916i of FIG. 9I. Sartori Odizzio teaches at Paragraphs 0095-0096 that the looks displayed via menu 916i may include user-specific recommendations of makeup products… The loaded image may be analyzed by platform 120 to determine certain characteristics of the face depicted in the loaded image… analysis performed on the image may detect… skin tone. Based on the detected characteristics, e.g., skin tone, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image.
Sartori Odizzio teaches at Paragraph 0094-0096 a graphical user interface showing selectable “Looks” via a graphical menu 916i…menu 916i is displayed showing screen captures of models displaying various predefined “looks”); Re Claim 10: The claim 10 encompasses the same scope of invention as that of the claim 7 except additional claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: obtain, from a client device, a second image captured by a camera associated with the client device and including a second visual representation of at least a second portion of a second face; process, using the trained machine learning model, the second image to identify the second portion of the second face as a second region of interest in the second image and to determine a second dominant skin tone associated with the region of interest; and cause, via a first user interface, the second dominant skin tone to be presented on the client device. Odizzio further teaches the claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: obtain, from a client device, a second image captured by a camera associated with the client device and including a second visual representation of at least a second portion of a second face; process, using the trained machine learning model, the second image to identify the second portion of the second face as a second region of interest in the second image and to determine a second dominant skin tone associated with the region of interest; and cause, via a first user interface, the second dominant skin tone to be presented on the client device ( Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. 
Odizzio teaches determining a region (e.g., eyeshadow region) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look. Using menu 918j (showing a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look are shown in FIG. 9D. Odizzio’s facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image.
Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as lip region or eyelid region, or cheek region or hair region, or mouth region, or nose region as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, a cheek region to apply a foundation skin tone) (e.g., applying bronzer to define or sculpt certain facial features, Paragraph 0096) corresponding to the respective face in the recommended look and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS.
9I-9J and Paragraph 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color) and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning-based models. The claimed first plurality of content items can be specifically mapped to Odizzio’s virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items are then specifically mapped to Odizzio’s recommended looks of Paragraph 0095 selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 and the skin tone of the virtual makeup looks in the storage.
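The skin-tone matching described above (recommended looks selected from the stored looks based on the tone detected in the loaded image) can be sketched in code. This is an illustrative sketch only and is not part of the prosecution record; the `Look` class, the tone labels, and the `recommend_looks` helper are hypothetical stand-ins for Odizzio's looks in repositories/databases 124 and the color classes of FIG. 5.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a stored look (cf. Odizzio's
# repositories/databases 124): each look records the dominant skin
# tone of its composite facial image.
@dataclass
class Look:
    name: str
    dominant_tone: str

# Hypothetical tone labels (cf. the color classes 502-508 of FIG. 5).
TONE_CLASSES = ["light", "medium-light", "medium-deep", "deep"]

def recommend_looks(looks, detected_tone):
    """Return the stored looks whose dominant skin tone matches the
    tone detected in the loaded facial image."""
    if detected_tone not in TONE_CLASSES:
        raise ValueError(f"unknown tone: {detected_tone}")
    return [lk for lk in looks if lk.dominant_tone == detected_tone]

library = [Look("natural-day", "medium-light"),
           Look("smoky-evening", "deep"),
           Look("bold-lip", "deep")]
print([lk.name for lk in recommend_looks(library, "deep")])
# -> ['smoky-evening', 'bold-lip']
```

The exact-match filter stands in for whatever similarity rule the reference actually uses; a production system could instead rank looks by color distance.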
Odizzio clearly shows at Paragraph 0095 that the recommended looks (second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908 where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic such as a skin tone of the facial image can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image wherein the certain characteristics (key features) include hair color and skin tone and the loaded image with the determined skin tone is used to find product recommendations wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, the image compositing 230 and the base image processing 232) and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS.
9D-9H and at Paragraph 0094 that “end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)”. Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors). Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products such as eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products wherein user interface selections of various makeup products have been shown in FIGS. 9D-9G (Paragraph 0088-0090), and the end users can submit their custom looks to virtual makeup platform 120. Accordingly, the custom looks can be displayed in the menu 916i wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250.
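The compositing step described above, in which a selected virtual makeup color is applied onto a base facial image to produce a stored look, can be sketched as a simple per-pixel alpha blend. This is an illustrative sketch only, not a description of the record; the function name, pixel format, and alpha value are hypothetical stand-ins for whatever blending Odizzio's image compositing 230 actually performs.

```python
# Illustrative alpha-blend in the spirit of Odizzio's image
# compositing 230, which composites a selected virtual makeup color
# onto a base facial image to produce a look (stored as looks 250).
def apply_virtual_product(base_pixel, product_rgb, alpha):
    """Blend a virtual makeup color onto one base-image pixel."""
    return tuple(round((1 - alpha) * b + alpha * p)
                 for b, p in zip(base_pixel, product_rgb))

# A semi-opaque foundation shade over one skin pixel:
print(apply_virtual_product((200, 150, 120), (180, 40, 60), 0.5))
# -> (190, 95, 90)
```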
Odizzio teaches at Paragraph 0088 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types and menu 918d includes an option to select from a range of colors, shades in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture. Odizzio teaches at Paragraph 0089 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types and the menu 918d includes an option to select from a range of colors, shades, in a particular line of foundation products. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS.
9I-9J and Paragraph 0094-0097 that the selectable looks are shown in the graphical menu 916i and looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products and analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908). Barron-provisional further teaches the claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: obtain, from a client device, a second image captured by a camera associated with the client device and including a second visual representation of at least a second portion of a second face; process, using the trained machine learning model, the second image to identify the second portion of the second face as a second region of interest in the second image and to determine a second dominant skin tone associated with the region of interest; and cause, via a first user interface, the second dominant skin tone to be presented on the client device (Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228…AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224. AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to apply the color to the live image 134 of the user 238.
AR effects 219 are determined based on skin tone where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238 and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body…may use other information to determine which body part 222 is having the beauty product 224 applied. The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 includes color 316…stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraph 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236…presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B…the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224…retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG. 4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B…the identify product code module 3408 of the determine beauty product module 126 of FIG.
34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Barron-provisional’s machine learning models, which automatically recognize particular features such as the skin tones in the images so that the images in the image repository can be indexed using the skin tones as keywords for image retrieval, into the system and method of Sartori Odizzio in order to query the looks (the face images) in the database directly based on the skin tones. One of ordinary skill in the art would have used the skin tones as keywords for indexing the images. Re Claim 11: Claim 11 encompasses the same scope of invention as claim 10 except for the additional claim limitation that the second dominant skin tone is determined based at least in part on a second dominant skin tone value and a plurality of predefined color threshold values. Odizzio further teaches the claim limitation that the second dominant skin tone is determined based at least in part on a second dominant skin tone value and a plurality of predefined color threshold values (Odizzio’s range of colors corresponds to the claimed predefined color threshold values. A range of colors means [c1, c2] where c1 represents a lower threshold and c2 represents an upper threshold of color values. With respect to the new claim limitation as amended (with highlighting) after the Board Decision rendered 9/17/2025, Odizzio teaches at Paragraph 0088-0089 that, as shown in screen 900d of FIG. 9D, menu 918d includes an option to select from a range of colors, shades, in a particular line of foundation products. As shown at screen capture 900g in FIG. 9G, a user has selected “lips” via menu 906 and “lipstick” via menu 906g.
In response, the user is presented with a graphical element 916g showing at least one virtual lipstick product or product line and graphical menu 918g including an option to select from a range of colors for a particular line of lipstick products. Odizzio teaches at Paragraph 0094 that a look is a particular combination of two or more makeup products and end users can configure their own custom looks by selecting various combinations of makeup products. Since each makeup product is associated with one of the target skin tones as disclosed at FIGS. 9A-9J, the custom looks are associated with the target skin tones.). Barron-provisional further teaches the claim limitation that the second dominant skin tone is determined based at least in part on a second dominant skin tone value and a plurality of predefined color threshold values (Barron-provisional teaches at Paragraph 0243 that the identify product code module 3408 may include a neural network trained with deep learning to identify the product code 3610 and at Paragraph 0125 that current color 1508 indicates a current selection of a variation of the beauty product 224B such as color 316 of FIG. 3 wherein FIG. 15 shows classes of color variations of the beauty product 224B with threshold colors. Barron-provisional teaches at Paragraph 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236…presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B…the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224…retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20.
Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG. 4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B…the identify product code module 3408 of the determine beauty product module 126 of FIG. 34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 includes color 316…stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228….AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224. AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to apply the color to the live image 134 of the user 238. AR effects 219 are determined based on skin tone where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238 and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body…may use other information to determine which body part 222 is having the beauty product 224 applied. 
The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 includes color 316…stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraph 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236…presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B…the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224…retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG. 4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B…the identify product code module 3408 of the determine beauty product module 126 of FIG. 34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224).
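The claim 11 mapping above reads a "range of colors" as a pair [c1, c2] of lower and upper color thresholds. That reading can be sketched as bucketing a measured dominant skin tone value against a set of predefined threshold values. This is an illustrative sketch only; the specific threshold values, the 0-255 lightness scale, and the class labels are hypothetical, not taken from either reference.

```python
import bisect

# Hypothetical predefined color threshold values on a 0-255 lightness
# scale; each adjacent pair [c1, c2) is a "range of colors" with a
# lower threshold c1 and an upper threshold c2.
THRESHOLDS = [0, 85, 140, 200, 256]
TONE_CLASSES = ["deep", "medium-deep", "medium-light", "light"]

def classify_dominant_tone(value):
    """Map a measured dominant skin tone value to the class whose
    threshold range contains it."""
    if not 0 <= value < 256:
        raise ValueError("value outside color scale")
    # bisect_right finds the first threshold greater than value, so
    # the preceding interval is the containing [c1, c2) range.
    i = bisect.bisect_right(THRESHOLDS, value) - 1
    return TONE_CLASSES[i]

print(classify_dominant_tone(90))
# -> medium-deep
```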
Re Claim 12: Claim 12 encompasses the same scope of invention as claim 7 except for the additional claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: obtain a content item having a second visual representation of an overall beauty aesthetic, extract, from the second visual representation, at least one product parameter that contributes to the overall beauty aesthetic; identify, based on the at least one product parameter, a second plurality of images from the first plurality of images, wherein the second plurality of images includes respective third visual representations of a similar overall beauty aesthetic; and cause, via a first user interface, the second plurality of images to be presented on a client device as responsive to a query. However, Odizzio further teaches the claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: obtain a content item having a second visual representation of an overall beauty aesthetic, extract, from the second visual representation, at least one product parameter that contributes to the overall beauty aesthetic; identify, based on the at least one product parameter, a second plurality of images from the first plurality of images, wherein the second plurality of images includes respective third visual representations of a similar overall beauty aesthetic; and cause, via a first user interface, the second plurality of images to be presented on a client device as responsive to a query (Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., eyeshadow region) in the particular look of FIG.
9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look. Using menu 918j (showing a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look are shown in FIG. 9D. Odizzio’s facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image.
For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as lip region or eyelid region, or cheek region or hair region, or mouth region, or nose region as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, a cheek region to apply a foundation skin tone) (e.g., applying bronzer to define or sculpt certain facial features, Paragraph 0096) corresponding to the respective face in the recommended look and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 9I-9J and Paragraph 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG.
9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color) and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning-based models. The claimed first plurality of content items can be specifically mapped to Odizzio’s virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image applied with the one or more virtual makeup products) of FIGS. 9I-9J produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items are then specifically mapped to Odizzio’s recommended looks of Paragraph 0095 selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 and the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908 where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic such as a skin tone of the facial image can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks and the loaded virtual makeup look has been produced via the process of FIGS.
9D-9H via the selection of a range of skin colors of FIGS. 9D-9H. Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image wherein the certain characteristics (key features) include hair color and skin tone and the loaded image with the determined skin tone is used to find product recommendations wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, the image compositing 230 and the base image processing 232) and virtual combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H and at Paragraph 0094 that “end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)”. Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image and a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors).
Accordingly, Odizzio made it clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products such as eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as a loaded image as disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products wherein user interface selections of various makeup products have been shown in FIGS. 9D-9G (Paragraph 0088-0090), and the end users can submit their custom looks to virtual makeup platform 120. Accordingly, the custom looks can be displayed in the menu 916i wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250. Odizzio teaches at Paragraph 0088 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types and menu 918d includes an option to select from a range of colors, shades in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture.
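The product-filtering menus described above (filtering available virtual makeup products by product type and refining by characteristics such as finish, coverage, or texture) amount to a faceted filter, which can be sketched as follows. This is an illustrative sketch only; the attribute names and catalog entries are hypothetical, not taken from the reference.

```python
# Illustrative faceted filter in the spirit of Odizzio's product
# selection menus 906/942h; the dictionary attribute names are
# hypothetical.
def filter_products(products, **criteria):
    """Keep only products whose attributes match every given filter."""
    return [p for p in products
            if all(p.get(k) == v for k, v in criteria.items())]

catalog = [
    {"name": "foundation-a", "type": "foundation", "finish": "matte"},
    {"name": "lipstick-b", "type": "lipstick", "finish": "gloss"},
    {"name": "foundation-c", "type": "foundation", "finish": "dewy"},
]
print([p["name"] for p in filter_products(catalog, type="foundation",
                                          finish="matte")])
# -> ['foundation-a']
```

Each keyword argument plays the role of one menu filter; combining several narrows the list, just as selecting a product type and then a finish would in the described interface.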
Odizzio teaches at Paragraph 0089 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and that the menu 918d includes an option to select from a range of colors (shades) in a particular line of foundation products. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS. 9I-9J and Paragraphs 0094-0097 that the selectable looks are shown in the graphical menu 916i; that the looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products; and that analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908. Re Claim 13: Claim 13 encompasses the same scope of invention as claim 12 except for the additional claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: cause, via a second user interface, a plurality of target skin tones to be presented on the client device; obtain, via an interaction with the second user interface, a first target skin tone from the plurality of target skin tones; identify a third plurality of images from the second plurality of images that includes a plurality of representations of a plurality of faces that are associated with the first target skin tone; and cause, via a third user interface, the third plurality of images to be presented on the client device.
However, Odizzio further teaches the claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: cause, via a second user interface, a plurality of target skin tones to be presented on the client device; obtain, via an interaction with the second user interface, a first target skin tone from the plurality of target skin tones; identify a third plurality of images from the second plurality of images that includes a plurality of representations of a plurality of faces that are associated with the first target skin tone; and cause, via a third user interface, the third plurality of images to be presented on the client device. Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region, foundation is applied to the entire face, and eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., an eyeshadow region) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look; the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., the entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look; the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look.
Using menu 918j (showing a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to the general facial region of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look are shown in FIG. 9D. Odizzio’s facial features or facial characteristics refer to the eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user.
Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as a lip region, eyelid region, cheek region, hair region, mouth region, or nose region as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, or a cheek region to apply a foundation skin tone; e.g., applying bronzer to define or sculpt certain facial features, Paragraph 0096) corresponding to the respective face in the recommended look and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules, including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and that a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models.
The claimed first plurality of content items can be specifically mapped to Odizzio’s virtual makeup looks of FIGS. 9I-9J stored in the repositories/databases 124 of Paragraph 0039 (a look is a composite facial image with one or more virtual makeup products applied), produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio’s recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 and the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (the second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic, such as a skin tone of the facial image, can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H.
Barron-provisional further teaches the claim limitation that the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: cause, via a second user interface, a plurality of target skin tones to be presented on the client device (Barron-provisional teaches at Paragraph 0066 that the determine effects module 128 uses deep learning that indicates changes (skin tone changes) to the live images 134 that should be made based on the AR tutorial video 228….AR effects 219 are determined based on beauty product information 304 that indicates changes that the beauty product 224 of beauty product data 302 will make to the body part 222, 308. For example, a color (skin color or skin tone) such as color 316 may be indicated as the change that is made to the user 238 from the application of the beauty product 224.
AR effects 219 may be determined based on the color and an area of body part 222 or body part 308 to apply the color to the live image 134 of the user 238. AR effects 219 are determined based on skin tone, where a skin tone of the user 238 is determined and then the application of the beauty product 224 is determined based on the skin tone of the user 238; and at Paragraph 0064 that the determine body part module 124 uses a neural network that is trained to identify different body parts from an image of a human body…may use other information to determine which body part 222 is having the beauty product 224 applied. The determine body part module 124 may determine that an eye region has changed colors in an AR tutorial video 228. Accordingly, the skin tone of the body part associated with a particular beauty product 224 is determined by the determine body part module 124. Barron-provisional teaches at Paragraph 0085 that variations 314 of the beauty product 224 include color 316…stored images of beauty products may be used for identifying the beauty product 224 from images of the beauty product 224. Barron-provisional teaches at Paragraphs 0227-0230 that the determine beauty product module 136 determines the beauty product 224 via UI screens presented to the presenter 236…presenter 236 selects beauty product 224 by making selections from edit menu 2402 and beauty product list 2908A or beauty product list 2908B….the determine beauty product module 126 uses a trained neural network to perform object recognition of the beauty product 224 so that the presenter 236 does not have to enter information regarding the beauty product 224….retrieves beauty product data 302 from a database such as beauty products 2018 of FIG. 20. Images of beauty product 326 may be used to request confirmation of the presenter 236 and/or to display the beauty product 224 such as in FIG.
4 where two beauty products 224 are displayed as beauty product 224 and beauty product 224B…the identify product code module 3408 of the determine beauty product module 126 of FIG. 34 may use the tutorial effects 218 to determine a color 316 of the beauty product 224 and use the color 316 to assist in identifying the beauty product 224); obtain, via an interaction with the second user interface, a first target skin tone from the plurality of target skin tones (the same citations to Paragraphs 0066, 0064, 0085 and 0227-0230 of Barron-provisional set forth above apply equally to this limitation); identify a third plurality of images from the second plurality of images that includes a plurality of representations of a plurality of faces that are associated with the first target skin tone; and cause, via a third user interface, the third plurality of images to be presented on the client device.
(The same citations to Paragraphs 0066, 0064, 0085 and 0227-0230 of Barron-provisional set forth above apply equally to these limitations.) Re Claim 14: Claim 14 encompasses the same scope of invention as claim 12 except for the additional claim limitation that at least one beauty product contributes to the similar overall beauty aesthetic of a second image from the second plurality of images; and the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: receive, from a client device, a request to render the at least one beauty product on a user content item; and present, on a display of the client device, a rendering of the at least one beauty product concurrent with a presentation of the user content item.
Sartori Odizzio teaches the claim limitation that at least one beauty product contributes to the similar overall beauty aesthetic of a second image from the second plurality of images; and the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: receive, from a client device, a request to render the at least one beauty product on a user content item; and present, on a display of the client device, a rendering of the at least one beauty product concurrent with a presentation of the user content item (Sartori Odizzio teaches at Paragraph 0091 that in response to receiving a makeup product selection and a request to apply the selected product, a composite image 908 including a base image of a human face and a makeup image based on the selected product may be displayed to the user; at Paragraphs 0094-0096 that selectable “Looks” are displayed in response to a user input selecting the “Looks” category…end users can configure their own custom looks by selecting various combinations of makeup products and can submit their custom looks to virtual makeup platform 120; and at Paragraph 0097 that the graphical user interface is displayed in response to a user selection of a particular look via menu 916i…in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look). Re Claim 15: Claim 15 encompasses the same scope of invention as claim 12 except for the additional claim limitation that the user content item is captured by a camera associated with the client device.
Sartori Odizzio teaches the claim limitation that the user content item is captured by a camera associated with the client device (Sartori Odizzio teaches at Paragraph 0042 that image capture module 234 may include the software and/or hardware for capturing images at a client device for virtual makeup application…may include a digital camera; at Paragraph 0077 that makeup images may be generated and composited with base images captured at different vantage points; at Paragraph 0087 that the user may be presented with screen 900b of FIG. 9B that includes an option 912b to “snap a selfie” and an option 914b to choose a model….a user may be allowed to capture an image via an image capture device associated with the client device 102….the selected image may be from the user’s own photo library; at Paragraph 0090 that a composite image 908 including a base image of a human face and a makeup image based on the selected product may be displayed to the user; and at Paragraph 0095 that a user may load an image of their face). Re Claim 16: Claim 16 encompasses the same scope of invention as claim 12 except for the additional claim limitation that the at least one product parameter includes at least one of: a color; a gloss; an opacity; a glitter; a glitter size; a glitter density; a shape; or an intensity.
Sartori Odizzio teaches the claim limitation that the at least one product parameter includes at least one of: a color; a gloss; an opacity; a glitter; a glitter size; a glitter density; a shape; or an intensity (Sartori Odizzio teaches at Paragraph 0047 that each virtual makeup product 246 represents a combination of an effect 248 along with a configuration 244 for the effect, wherein effects 248 represent combinations of various layers of visual filters 249 and a visual filter 249 can generally be understood as a particular visual effect, e.g., blur or color overlay; and at Paragraphs 0049-0056 that examples of filters that may be applied in this context include….glitter. Image filters include lighting, blur, and threshold (highlighting high intensity areas of the image)….the decolorate filter operates over the skin of a person depicted in the base image…while maintaining other properties such as tone….a light-focus filter can be used to simulate the effect of reflected light on a portion of the base image. This has particular application for simulating the effect of a high gloss makeup product, e.g., lip gloss….parameters for a color dot filter can include shape, color, type, an intensity multiplier, spread, and opacity). Re Claim 17: Claim 17 parallels claim 1 in method form and is subject to the same rationale of rejection as claim 1.
Re Claim 20: Claim 20 encompasses the same scope of invention as claim 17 except for the additional claim limitations of: obtaining an input content item having a visual representation of an overall beauty aesthetic; extracting, from the visual representation, at least one product parameter that contributes to the overall beauty aesthetic; identifying, based on the at least one product parameter, a third plurality of content items from the first plurality of content items, wherein the third plurality of content items includes respective second visual representations of a similar overall beauty aesthetic; determining, based at least in part on the first target skin tone and the respective dominant skin tones associated with the third plurality of content items, a fourth plurality of content items from the third plurality of content items, wherein the respective dominant skin tone associated with the fourth plurality of content items includes the first target skin tone; and causing, via a first user interface, the fourth plurality of images to be presented on a client device.
However, Odizzio further teaches the claim limitation that obtaining an input content item having a visual representation of an overall beauty aesthetic, extracting, from the visual representation, at least one product parameter that contributes to the overall beauty aesthetic; identifying, based on the at least one product parameter, a third plurality of content items from the first plurality of content items, wherein the third plurality of content items includes respective second visual representations of a similar overall beauty aesthetic; determining, based at least in part on the first target skin tone and the respective dominant skin tones associated with the third plurality of content items, a fourth plurality of content items from the third plurality of content items, wherein the respective dominant skin tone associated with the fourth plurality of content items includes the first target skin tone; and causing, via a first user interface, the fourth plurality of images to be presented on a client device (Odizzio teaches at Paragraph 0064 that lipstick is applied to a lip region and foundation is applied to the entire face. Eyeshadow is applied to regions of skin surrounding the eye. Odizzio teaches determining a region (e.g., eyeshadow region) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the eyeshadow region corresponding to the respective face of the particular look, the color values of the virtual eyeshadow product as applied to the particular look can be found in FIG. 9E. Odizzio teaches determining a region (e.g., entire face) in the particular look of FIG. 9J corresponding to the respective face of the particular look and determining color values for the entire face corresponding to the respective face of the particular look, the color values of the virtual foundation product as applied to the particular look can be found in FIG. 9D. Odizzio teaches at FIG. 
9J and Paragraph 0097 that in response to a user selection of a particular look, a menu 918j is displayed showing the user selectable makeup products that are included in the selected look. Using menu 918j (shown a virtual eyeshadow product corresponding to the eyeshadow region of the particular look and a virtual foundation product corresponding to general facial region of the particular look), a user can select and apply the virtual makeup products, for example, as previously described. The color values of the eyeshadow product as applied to the eyeshadow region of the particular look are shown in FIG. 9E. The color values of the foundation product as applied to the facial region of the particular look are shown in FIG. 9D. Odizzio’s facial features or facial characteristics refer to eyes, nose, mouth, hair color and texture, and skin tone in relation to the color values of a facial region in a recommended look to be matched with the corresponding facial region of the loaded face image. Odizzio teaches at Paragraph 0095 that the loaded image may be analyzed by platform 120 (e.g., by using the previously described feature detection processes and/or any other computer vision processes) to determine certain characteristics of the face depicted in the loaded image. For example, analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features (e.g., eyes, nose, mouth, etc.), hair color and texture, and skin tone. Based on the detected characteristics, platform 120 may generate one or more recommended looks that are specifically tailored to the characteristics of the face in the loaded image. In some embodiments, product recommendations may be based on a user-specific product browsing or selection history. For example, the system may recommend a look that includes a makeup product previously selected by the user. 
Odizzio teaches at Paragraph 0096 that users may be presented with options to rate makeup products and looks both in general and as applied to their specific facial features (such as the lip, eyelid, cheek, hair, mouth, or nose regions as key facial features). Odizzio teaches determining a region (e.g., an eye region to apply the eyeshadow skin tone, a cheek region to apply a foundation skin tone) (e.g., applying bronzer to define or sculpt certain facial features, Paragraph 0096) corresponding to the respective face in the recommended look and determining color values for a facial region corresponding to the respective face in the recommended look, the color values corresponding to the respective primary color (skin tone) of the face represented in the respective recommended look associated with the corresponding virtual makeup product applied to the particular look. The recommended looks (FIGS. 9I-9J and Paragraphs 0095-0096) are selected or filtered from the looks in the repositories/databases 124 (see Paragraph 0039) based on the detected characteristics (the target skin tone) of the loaded composite facial image 908 and based on the primary and/or secondary color combinations of the virtual products included in the looks. Odizzio’s recommended looks displayed in the menu 916i of FIG. 9I are tied to the dominant color (the primary color) of the looks. Odizzio teaches at Paragraph 0096 that product recommendations may be based on subjective or objective rules including the primary vs. secondary color combinations (the primary color is the dominant skin tone/color), and a retailer may individually define certain makeup recommendation rules to generate looks that are selectable via menu 916i. The look recommendations may be automatically generated using machine-learning based models.
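The claimed two-stage narrowing (a third plurality selected by an extracted product parameter, then a fourth plurality whose dominant skin tone includes the target) can be sketched as two successive list filters. The `Look` dataclass and string-valued tone classes below are hypothetical stand-ins for the stored looks and color classes discussed above, not structures from either reference:

```python
from dataclasses import dataclass

@dataclass
class Look:
    """A stored content item: a composite facial image with products applied."""
    name: str
    product_types: set         # e.g. {"eyeshadow", "foundation"}
    dominant_skin_tone: str    # e.g. a color class such as "medium"

def filter_looks(looks, product_parameter, target_skin_tone):
    """Two-stage filtering mirroring the claim language.

    Stage 1: keep looks sharing the extracted product parameter
             (the 'third plurality of content items').
    Stage 2: keep those whose dominant skin tone matches the target
             (the 'fourth plurality of content items').
    """
    third = [lk for lk in looks if product_parameter in lk.product_types]
    return [lk for lk in third if lk.dominant_skin_tone == target_skin_tone]
```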
The claimed first plurality of content items can be specifically mapped to Odizzio’s virtual makeup looks stored in repositories/databases 124 of Paragraph 0039 (a look is a composite facial image with one or more virtual makeup products applied) of FIGS. 9I-9J, produced by applying one or more virtual makeup products according to one of the user interfaces at FIGS. 9D-9H. The claimed second plurality of content items is then specifically mapped to Odizzio’s recommended looks of Paragraph 0095, selected from the virtual makeup looks in the databases 124 based on the matching of the skin tone of the facial image 908 and the skin tone of the virtual makeup looks in the storage. Odizzio clearly shows at Paragraph 0095 that the recommended looks (the second plurality of content items) are generated by the analysis module from the virtual makeup looks stored in the database 124 as a result of matching the characteristics of the virtual makeup looks with the characteristics of the facial image 908, where the two or more target skin tones associated with the virtual makeup products are applied to the facial image 908 to generate the recommended looks. Odizzio teaches that a target characteristic such as a skin tone of the facial image can be obtained in a loaded virtual makeup look via interaction with the user interface to find additional content items such as recommended looks and/or custom looks, and the loaded virtual makeup look has been produced via the process of FIGS. 9D-9H via the selection of a range of skin colors of FIGS. 9D-9H.
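The skin tone matching between the loaded facial image and the stored looks described above could, in one simple reading, be a nearest-color comparison. The sketch below keeps looks whose stored dominant tone falls within a fixed Euclidean RGB distance of the detected target tone; the dictionary layout and distance threshold are illustrative assumptions, not what either reference discloses:

```python
def nearest_tone_matches(target_rgb, looks, max_distance=60.0):
    """Select stored looks whose dominant skin tone is within a
    Euclidean RGB distance of the tone detected in the loaded face.

    looks: iterable of dicts with a 'tone_rgb' (r, g, b) entry.
    """
    def dist(a, b):
        # Plain Euclidean distance in RGB space
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return [lk for lk in looks if dist(target_rgb, lk["tone_rgb"]) <= max_distance]
```

A production system would more likely compare in a perceptually uniform space (e.g., CIELAB) or against discrete tone classes, but the filtering structure is the same.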
Odizzio teaches at Paragraph 0095 determining certain characteristics of the face depicted in the loaded image, wherein the certain characteristics (key features) include hair color and skin tone, and the loaded image with the determined skin tone is used to find product recommendations, wherein the platform 120 may generate one or more recommended looks (additional content items) that are specifically tailored to the characteristics (skin tone) of the face in the loaded image. Odizzio further teaches custom looks can be produced by the user by selecting one or more virtual makeup products. Odizzio teaches at FIGS. 2A-2B and Paragraph 0044 that particular combinations of multiple virtual makeup products, e.g., a virtual shadow and a virtual blush, may be stored as looks 250 (which are produced by the virtual makeup image generator 228, image compositing 230, and base image processing 232), and combinations of multiple virtual makeup products have been performed in the user interface at FIGS. 9D-9H, and at Paragraph 0094 that “end users can configure their own custom looks (e.g., by selecting various combinations of makeup products) and can submit their custom looks to virtual makeup platform 120 (for storage)”. Odizzio teaches at Paragraph 0083 that a makeup image shape 702 based on the application of a virtual eyeshadow product may be composed of multiple shapes 704b, 706b and 708b, and alternative combinations 710b-724b illustrate how alternative shapes can be arranged to produce different looks when applied to a base image; a user may be presented with one or more predefined makeup application options with which to apply a selected one or more virtual makeup products (the feature of producing different looks has been described in detail in FIGS. 9D-9H with respect to the selection of the colors).
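The "look" compositing described here, a base facial image with one or more virtual makeup products layered on, can be illustrated with a basic per-region alpha blend. The flat product colors, boolean masks, and fixed opacity are simplifications for illustration, not Odizzio's compositing pipeline (generator 228 / compositing 230):

```python
import numpy as np

def apply_product(base, product_color, region_mask, opacity=0.6):
    """Alpha-blend a flat product color into a masked facial region.

    base: H x W x 3 uint8 image; region_mask: H x W boolean mask
    (e.g., lips for lipstick, eyelids for eyeshadow).
    """
    out = base.astype(np.float32)
    color = np.asarray(product_color, dtype=np.float32)
    out[region_mask] = (1 - opacity) * out[region_mask] + opacity * color
    return out.astype(np.uint8)

def composite_look(base, products):
    """Layer each (color, mask) product in order to produce a 'look'."""
    image = base
    for color, mask in products:
        image = apply_product(image, color, mask)
    return image
```

Applying products in order matters: a foundation layer over the whole face first, then localized products on top, mirrors the lip/eye/full-face regions the office action walks through.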
Accordingly, Odizzio makes clear that the different looks are produced and stored in a library of looks by applying the selected one or more virtual makeup products, such as the eyeshadow virtual makeup products in FIGS. 9E-9F. A user-produced look (a composite image) can be loaded as the loaded image disclosed in Paragraph 0095. Odizzio teaches at Paragraph 0094 that end users can configure their own custom looks by selecting various combinations of makeup products (user interface selections of various makeup products are shown in FIGS. 9D-9G, Paragraphs 0088-0090), and the end users can submit their custom looks to virtual makeup platform 120. Accordingly, the custom looks can be displayed in the menu 916i, wherein the custom looks configured by the end users are produced by selecting various combinations of makeup products in FIGS. 9D-9G. The particular combinations of multiple virtual makeup products applied to the user face image 908 in FIGS. 9A-9G can be stored as looks 250. Odizzio teaches at Paragraph 0088 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and menu 918d includes an option to select from a range of colors and shades in a particular line of foundation products. Odizzio teaches at Paragraph 0090 that the menu 942h may include one or more options to refine or filter a list of available makeup products according to various characteristics, e.g., finish, coverage, texture.
Odizzio teaches at Paragraph 0089 that the graphical user interface may also include one or more product selection menus 906 enabling a user to filter available virtual makeup products according to various filters such as product types, and the menu 918d includes an option to select from a range of colors and shades in a particular line of foundation products. Odizzio teaches at FIGS. 9I-9J identifying at least one additional look in the menu 916i of FIGS. 9I-9J from the plurality of additional looks in the storage of the virtual makeup platform with the matching skin tone. Odizzio teaches at FIGS. 9I-9J and Paragraphs 0094-0097 that the selectable looks are shown in the graphical menu 916i, and looks displayed in menu 916i may include user-specific recommendations for combinations of makeup products; analysis performed on the image may detect a general shape of the face, shapes and arrangement of certain key features, hair color and texture, and skin tone, wherein the skin tone of the displayed looks in the menu 916i matches the skin tone of the composite facial image in the area 908.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIN CHENG WANG, whose telephone number is (571) 272-7665. The examiner can normally be reached Mon-Fri 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JIN CHENG WANG/Primary Examiner, Art Unit 2617

Prosecution Timeline

Dec 28, 2021
Application Filed
Oct 22, 2022
Non-Final Rejection — §103
Mar 27, 2023
Response Filed
Jun 21, 2023
Non-Final Rejection — §103
Sep 27, 2023
Response Filed
Jan 08, 2024
Final Rejection — §103
Mar 12, 2024
Response after Non-Final Action
Apr 10, 2024
Notice of Allowance
Jun 10, 2024
Response after Non-Final Action
Jun 21, 2024
Response after Non-Final Action
Sep 13, 2024
Response after Non-Final Action
Nov 15, 2024
Response after Non-Final Action
Nov 18, 2024
Response after Non-Final Action
Nov 19, 2024
Response after Non-Final Action
Nov 19, 2024
Response after Non-Final Action
Sep 16, 2025
Response after Non-Final Action
Nov 17, 2025
Request for Continued Examination
Nov 25, 2025
Response after Non-Final Action
Jan 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594883
DISPLAY DEVICE FOR DISPLAYING PATHS OF A VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12597086
Tile Region Protection in a Graphics Processing System
2y 5m to grant Granted Apr 07, 2026
Patent 12592012
METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM FOR COLLAGE MAKING
2y 5m to grant Granted Mar 31, 2026
Patent 12586270
GENERATING AND MODIFYING DIGITAL IMAGES USING A JOINT FEATURE STYLE LATENT SPACE OF A GENERATIVE NEURAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12579709
IMAGE SPECIAL EFFECT PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
59%
Grant Probability
69%
With Interview (+10.3%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 832 resolved cases by this examiner. Grant probability derived from career allow rate.
