Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,605

AI-Based Predictive Analysis Engine for Beauty Trends

Final Rejection: §101, §103
Filed: Jun 12, 2024
Examiner: ANSARI, AZAM A
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ELC Management LLC
OA Round: 4 (Final)
Grant Probability: 48% (Moderate)
OA Rounds: 5-6
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 48% (162 granted / 338 resolved; -4.1% vs TC avg)
Interview Lift: +49.7% (resolved cases with interview vs without)
Avg Prosecution: 3y 8m (typical timeline)
Currently Pending: 38
Total Applications: 376 (career history, across all art units)
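The headline figures above are simple ratios over the examiner's career counts. A minimal sketch, assuming the interview lift is read as percentage points gained by interviewed cases over the career allowance baseline (the dashboard does not state its exact basis):

```python
# Career counts taken from the dashboard above; the "with interview"
# basis is an assumption, not stated by the source.
granted, resolved = 162, 338
allow_rate = granted / resolved              # career allowance rate

with_interview_rate = 0.98                   # dashboard's "With Interview" figure
interview_lift = with_interview_rate - allow_rate

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Interview lift:    {interview_lift:+.1%}")
```

Under this reading the lift comes out near the reported +49.7%, consistent with the 98% with-interview figure.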

Statute-Specific Performance

§101: 34.2% (-5.8% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates; figures based on career data from 338 resolved cases.
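Back-solving the implied Tech Center baseline from each statute's figure and its delta is a one-liner; notably, all four deltas point at the same ~40% TC average estimate:

```python
# (examiner figure, delta vs TC average), both in percent, from the table above
stats = {
    "§101": (34.2, -5.8),
    "§103": (38.9, -1.1),
    "§102": (8.1, -31.9),
    "§112": (9.2, -30.8),
}

# implied TC average = examiner figure minus delta
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(tc_avg)   # every statute resolves to the same 40.0 baseline
```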

Office Action

Grounds: §101, §103
DETAILED ACTION

Response to Amendment

This action is in response to the amendment filed on 02/06/2026. Claims 1, 3, 4, 6, 8, 10, 11, and 13 have been amended and claims 2 and 9 have been canceled. Claims 1, 3-8, 10-14, and 21 are pending and currently under consideration for patentability.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Inventorship

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, 10-14, and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Step 1: In a test for patent subject matter eligibility, claims 1, 3-8, 10-14, and 21 are found to be in accordance with Step 1 (see 2019 Revised Patent Subject Matter Eligibility Guidance), as they are related to a process, machine, manufacture, or composition of matter.
Claims 1, 3-7 and 21 recite a method and claims 8, 10-14 recite a system. When assessed under Step 2A, Prong I, they are found to be directed towards an abstract idea. The rationale for this finding is explained below.

Step 2A, Prong I: Under Step 2A, Prong I, independent claims 1 and 8 are directed to an abstract idea without significantly more, as they all recite a judicial exception. Claims 1 and 8 recite limitations directed to the abstract idea including obtaining social media content describing one or more trending looks having a popularity metric above a popularity threshold; analyzing the social media content to identify one or more characteristics of the one or more trending looks; comparing the one or more characteristics to a set of beauty products to identify at least one beauty product corresponding to the one or more characteristics; […] generate beauty content associated with the at least one beauty product including an image of a person having visual features resembling the one or more trending looks and a reference to the at least one beauty product which can be used in creating the one or more trending looks, wherein the beauty content is generated in real-time as the one or more trending looks are identified; and providing the beauty content for display to a user. These further limitations are not seen as any more than the judicial exception.
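For orientation, the limitations the examiner groups under the abstract idea describe a five-step pipeline: obtain trending content, identify characteristics, match products, generate content, and display it. A hypothetical, much-simplified sketch of that flow; every name and data value here is illustrative, not from the application:

```python
POPULARITY_THRESHOLD = 10_000   # stand-in for the claimed popularity threshold

def obtain_trending_looks(posts):
    # step 1: keep social media content whose popularity metric clears the threshold
    return [p for p in posts if p["likes"] > POPULARITY_THRESHOLD]

def identify_characteristics(look):
    # step 2: analyze the content to identify characteristics of the look
    return look["tags"]

def match_products(characteristics, catalog):
    # step 3: compare characteristics against a set of beauty products
    return [p for p in catalog if set(p["traits"]) & set(characteristics)]

def generate_beauty_content(products, characteristics):
    # step 4: stand-in for the claimed generative AI model; pairs an image
    # reference with references to the matched products
    return {"image": f"render({', '.join(sorted(characteristics))})",
            "products": [p["name"] for p in products]}

posts = [{"likes": 25_000, "tags": ["copper eyeshadow", "glossy lip"]},
         {"likes": 900,    "tags": ["matte base"]}]
catalog = [{"name": "Shimmer Palette 03", "traits": ["copper eyeshadow"]},
           {"name": "Lip Lacquer 12",     "traits": ["glossy lip"]}]

trending = obtain_trending_looks(posts)
characteristics = identify_characteristics(trending[0])
matched = match_products(characteristics, catalog)
content = generate_beauty_content(matched, characteristics)  # step 5: provide for display
```

The examiner's point is that each of these steps, stripped of the processor and model recitations, could be performed mentally or as a marketing activity; the sketch only makes the data flow concrete.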
Claims 1 and 8 recite additional limitations including by one or more processors; “applying the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model […]; and wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content”. The claims are considered to be an abstract idea under certain methods of organizing human activity because the claims are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) such as providing beauty content to users based on a comparison of beauty characteristics and products. The claims are also considered to be an abstract idea under mental processes because the claims are directed to concepts performed in the human mind (including an observation, evaluation, judgment, opinion) such as obtaining data (i.e. 
media content describing beauty products), analyzing data (i.e. media content to identify characteristics), comparing data (i.e. characteristics and beauty products to identify a beauty product); and providing data (i.e. beauty content). Therefore, under Step 2A, Prong I, claims 1 and 8 are directed towards an abstract idea.

Step 2A, Prong II: Step 2A, Prong II is to determine whether any claim recites any additional element that integrates the judicial exception (abstract idea) into a practical application. Claims 1 and 8 recite additional limitations including by one or more processors; “applying the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model […]; and wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content”. The limitations reciting – “by one or more processors” and applying data (i.e.
beauty product and characteristics of trending looks) “to a generative artificial intelligence (AI) model” are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use, i.e. processors and an AI model, and therefore do not integrate the abstract idea into a practical application. In Affinity Labs of Texas v. DirecTV, LLC, the court explained that although such additional elements limit the use of the abstract idea, this type of limitation merely confines the use of the abstract idea to a particular technological environment and fails to add an inventive concept to the claims. Under Step 2A, Prong II, these claims remain directed towards an abstract idea.
Step 2B: Claims 1 and 8 recite additional limitations including by one or more processors; “applying the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model […]; and wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content”. The limitations reciting – “by one or more processors” and applying data (i.e. beauty product and characteristics of trending looks) “to a generative artificial intelligence (AI) model” do not integrate the judicial exception (abstract idea) into a practical application because of the analysis provided in Step 2A, Prong II. 
Claims 1 and 8 also recite additional limitations “wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content”.

Merely training a machine learning model with known inputs (e.g. materials promoting products and characteristics of the products) in order to determine an output (e.g. beauty products) is seen as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g). It has been well-known since at least 1996 that the “Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.” (See Wikipedia: Machine learning: The definition "without being explicitly programmed" is often attributed to Arthur Samuel, who coined the term "machine learning" in 1959, but the phrase is not found verbatim in this publication, and may be a paraphrase that appeared later. Confer "Paraphrasing Arthur Samuel (1959), the question is: How can computers learn to solve problems without being explicitly programmed?" in Koza, John R.; Bennett, Forrest H.; Andre, David; Keane, Martin A. (1996).
Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. Artificial Intelligence in Design '96. Springer, Dordrecht. pp. 151–170. doi:10.1007/978-94-009-0279-4_9.”).

Furthermore, ¶ [0010] of U.S. Publication 2008/0050047 to Bashyam discloses that it is conventional or well-known to apply image/text encoding schemes to an adaptive prediction model; “Huffman encoding is one well known species among the various encoding techniques that may be used during compression. For example, the industry standard JPEG image compression algorithm employs Huffman encoding on DCT coefficients (Discrete Cosine Transform factors) extracted from a to-be-compressed input image (typically, a YCrCb coded image). While Huffman encoding may perform well in some instances, the encoding technique of choice for variable length and/or limited length entropy encoding is known as arithmetic encoding. Arithmetic encoding (ARI for short) relies on the maintaining of a running history of recently received un-compressed values (alphabet characters or symbols) and on the maintaining of a fixed or variable prediction model that indicates with fairly good accuracy what next un-compressed value (character or symbol) is most likely to appear in a sampled stream of input data given an input history of finite length. A seminal description of arithmetic encoding may be found in U.S. Pat. No. 4,122,440 issued Oct. 24, 1978 to Langdon, Jr., et al. A more modern example may be found in U.S. Pat. No. 6,990,242 issued Jan. 24, 2006 to Malvar. The latter provides a background explanation regarding a conventional arithmetic encoding scheme and how it may be coupled with an adaptive predicting model.”

Claims 1 and 8 do not include additional elements or a combination of elements that result in the claims amounting to significantly more than the judicial exception.
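The Bashyam passage treats entropy encoding (Huffman, arithmetic) as conventional technology. For readers unfamiliar with the technique, here is a minimal Huffman coder over a toy string, included purely as background on what the cited art considers well known, not as anything the application claims:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # one heap entry per symbol: [frequency, tiebreak, {symbol: code-so-far}]
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # least frequent subtree
        hi = heapq.heappop(heap)
        # prefix "0" onto the low branch and "1" onto the high branch
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
# frequent symbols get short codes; the total encoded length is
# optimal for these symbol frequencies
```

Arithmetic encoding, which Bashyam prefers, refines the same idea by coding the whole stream against a running probability model instead of assigning a fixed bit string per symbol.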
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the applicant’s specification describes “general-purpose processors”, ¶¶ [00107] [00108], for implementing the processor, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. Under Step 2B in a test for patent subject matter eligibility, these claims are not patent eligible.

Dependent claims 3-7, 21 and 10-14 further recite the method and system of claims 1 and 8, respectively. Dependent claims 3-7, 10-14, and 21 when analyzed as a whole are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea: Under Step 2A, Prong I, these additional claims only further narrow the abstract idea set forth in claims 1 and 8. For example, claims 3-7, 10-14, and 21 describe the limitations for providing beauty content to users based on a comparison of beauty characteristics and products – which is only further narrowing the scope of the abstract idea recited in the independent claims. Under Step 2A, Prong II, for dependent claims 3-7, 10-14, and 21, there are no additional elements introduced. Thus, they do not present integration into a practical application, or amount to significantly more. Under Step 2B, dependent claims 7 and 14 recite – “applying, by the one or more processors, text describing the at least one beauty product to a text encoder configured to map the text to a subset of the materials promoting products which were used to train the generative AI model”.
However, merely applying inputs (i.e. beauty products or text describing beauty products) to a machine learning model trained with known data (e.g. materials promoting products and characteristics of the products) in order to determine an output (e.g. beauty products) is seen as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g). It has been well-known since at least 1996 that the “Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.” (See Wikipedia: Machine learning: The definition "without being explicitly programmed" is often attributed to Arthur Samuel, who coined the term "machine learning" in 1959, but the phrase is not found verbatim in this publication, and may be a paraphrase that appeared later. Confer "Paraphrasing Arthur Samuel (1959), the question is: How can computers learn to solve problems without being explicitly programmed?" in Koza, John R.; Bennett, Forrest H.; Andre, David; Keane, Martin A. (1996). Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. Artificial Intelligence in Design '96. Springer, Dordrecht. pp. 151–170. doi:10.1007/978-94-009-0279-4_9.”). The dependent claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Additionally, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, the additional claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-7, 21 and 8, 10-14 are method and system claims, respectively, with substantially indistinguishable features between each group. For purposes of compact prosecution, the Office has grouped the common method, system and non-transitory computer readable storage medium claims in applying applicable prior art.

Claim(s) 1, 3-8, 10-14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent 12,293,604 to Lotti in view of U.S. Publication 2023/0025533 to Lindgren, U.S. Publication 2023/0169566 to James, and U.S. Publication 2024/0346774 to Kwak.

With respect to Claim 1: Lotti teaches: A method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends, the method comprising: obtaining, by one or more processors, [[social]] media content describing one or more trending looks […] (i.e. receiving image of human face, which is media content describing a look) (Lotti: Col. 30 Lines 11-17 “At block 291, processing logic receives 2D image data corresponding to a 2D image of a human face. In some embodiments, the 2D image is a frontal image of a human face (e.g., the subject's face). The 2D image may be selected (e.g., by a user) from multiple images (e.g., stored on a client device such as a mobile phone, etc.) or may be captured using a camera.” Furthermore, as cited in Col.
19 Lines 51-58 “In some embodiments, user preferences 222 may include characteristics such as the user's preferred or desired style, desired beauty products color, etc. For example, a user preference can include a bold style rather than a natural look style as pertaining to applied beauty products. In some embodiments, user preferences 222 includes other user attributes such as age and/or lifestyle, etc.”); analyzing, by the one or more processors, the [[social]] media content to identify one or more characteristics of the one or more trending looks (i.e. analyzing the face image in order to identify textual identifiers/characteristics of the trending or desired look) (Lotti: Col. 30 Lines 24-30 “At block 293, processing logic determines (e.g., generates, etc.), based on the 2D image data, a textual identifier that describes a facial feature of the human face. In some embodiments, the textual identifier is determined based at least in part on the 3D model. The textual identifier may include textual information such as information described herein above with respect to FIG. 2A.” Furthermore, as cited in Col. 19 Lines 51-58 “In some embodiments, user preferences 222 may include characteristics such as the user's preferred or desired style, desired beauty products color, etc. For example, a user preference can include a bold style rather than a natural look style as pertaining to applied beauty products. In some embodiments, user preferences 222 includes other user attributes such as age and/or lifestyle, etc.”); comparing, by the one or more processors, the one or more characteristics to a set of beauty products to identify at least one beauty product corresponding to the one or more characteristics (i.e. comparing textual identifier with database to generate a prompt to identify related beauty products) (Lotti: Col. 
30 Lines 47-57 “At block 295, processing logic generates a prompt that describes the information related to at least some of the plurality of beauty products and information identifying the textual identifier. In some embodiments, the prompt includes textual information associated with the textual identifier and the information relevant to the textual identifier identified in the database (e.g., at block 294). At block 296, processing logic provides, to a generative machine learning model, the prompt including information identifying the textual identifier and contextual information such as relevant information identified from the database.”); applying, by the one or more processors, the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model to generate beauty content associated with the at least one beauty product including [[an image of a person having visual features resembling the one or more trending looks]] and a reference to the at least one beauty product which can be used in creating the one or more trending looks, […] (i.e. applying machine learning ai to identify beauty products including visual characteristics describing a desired or trending look and a reference to an identified beauty product creating that trending look, wherein the machine learning model is trained on beauty products) (Lotti: Col. 30 Lines 57-67 “In some embodiments, information associated with the textual identifier is provided to a generative machine learning model (e.g., generative machine learning model 250 of FIG. 2A). The generative machine learning model may be trained to output indications of one or more beauty products related to the facial feature described by the textual identifier. 
In some embodiments, the generative machine learning model may output an identification of a beauty product related to the facial feature described by the textual identifier based on the input prompt.” Furthermore, as cited in Col. 11 Lines 30-36 “In some embodiments, a machine learning model (e.g., also referred to as an "artificial intelligence (AI) model" herein) can include a discriminative machine learning model (also referred to as "discriminative AI model" herein), a generative machine learning model (also referred to as "generative AI model" herein), and/or other machine learning model.” Furthermore, as cited in Col. 19 Lines 48-58 “In some embodiments, user preferences 222 may include substantially non-detectable attributes of the user (e.g., non-detectable based on the 2D image 220). In some embodiments, user preferences 222 may include characteristics such as the user's preferred or desired style, desired beauty products color, etc. For example, a user preference can include a bold style rather than a natural look style as pertaining to applied beauty products. In some embodiments, user preferences 222 includes other user attributes such as age and/or lifestyle, etc.” Furthermore, as cited in Col. 29 Lines 13-64 “The identified objects 270 may be generated further in response to relevant information ( e.g., relevant to the textual identifiers 240A-N and/or the user preferences 222) from the beauty products database 125 that is additionally included in the prompt 164. For example, the identified objects 270 may include beauty products ( e.g., object 260A and object 260B) that fit one or more facial features identified and/or described by textual identifier(s) 240A-240N while also complying with the user preferences 222. In an example, the identified objects 270 may include a selection of artificial eyelashes (e.g., one or more beauty products) that compliment and/or are a "best-fit" for a particular eye shape (e.g., almond shape, etc.) 
and/or eye size, where the eye shape/size is indicated by the textual identifier(s) 240A-240N.…In some embodiments, an indication of the identified object (e.g., beauty product) is provided for display at a GUI of the client device 110. For example, a GUI on the client device 110 may display an image and/or information related to an identified beauty product ( e.g., object 270A) output from the filter 280. The user may then be able to purchase the identified beauty product using one or more inputs via the GUI.”), […]; and providing, by the one or more processors, the beauty content for display to a user (i.e. providing beauty products for display) (Lotti: Col. 31 Lines 1-11 “At block 297, processing logic obtains, from the generative machine learning model, an output identifying a subset of a plurality of beauty products related to the facial feature. In some embodiments, the subset of beauty products includes one or more beauty products identified by the generative machine learning model in the beauty products database. The one or more beauty products may be related to the facial feature described by the textual identifier. At block 298, processing logic provides an indication of at least one beauty product of the subset of the plurality of beauty products for display at a GUI.”). 
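The Lotti flow cited above (blocks 291-298) amounts to: derive a textual identifier from the face image, pull relevant database entries, assemble a prompt, and hand it to a generative model. A hedged sketch of that prompt-assembly flow; every function, name, and data value here is a hypothetical stand-in, not Lotti's code:

```python
def textual_identifier(image_features):
    # block 293: describe a facial feature of the face in the 2D image
    return f"{image_features['eye_shape']} eyes"

def relevant_database_info(identifier, products_db):
    # block 294: pull beauty-product database entries relevant to the identifier
    return [p for p in products_db if identifier in p["fits"]]

def build_prompt(identifier, context):
    # block 295: prompt = textual identifier + relevant product information
    names = ", ".join(p["name"] for p in context)
    return f"Recommend beauty products for {identifier}. Candidates: {names}."

products_db = [
    {"name": "Lash Set A", "fits": ["almond eyes"]},
    {"name": "Liner B",    "fits": ["round eyes"]},
]

ident = textual_identifier({"eye_shape": "almond"})
context = relevant_database_info(ident, products_db)
prompt = build_prompt(ident, context)   # block 296: fed to the generative model
```

The generative model's role in Lotti is then to return a subset of the candidate products for display (blocks 297-298), which is why the examiner reads it as product identification rather than image synthesis.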
Lotti does not explicitly disclose wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model is trained on encoded materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content. However, Lindgren further discloses: wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics (i.e. product vectors comprising materials of beauty products and characteristics of beauty products such as ingredients and functions of ingredients is used to train the machine learning model) (Lindgren: ¶ [0068] “According to an embodiment, product database 400 may be configured to include relevant information concerning beauty, hygiene, and health products, such as to facilitate matching products with individuals. Product database 400 can include detailed product information, such as the non-limiting examples of ingredients, functions of the ingredients, allergens, ingredient free data and the like. 
Additionally, each product may have a vector representation of the product details stored in the product database 400.” Furthermore, as cited in ¶ [0076] “The training datasets may then be fed into a recurrent neural network 1102 comprising one or more hidden recurrent layers which may embed the input training datasets and extract weighted features which define the datasets as it passes through each of the one or more hidden recurrent layers. The hidden recurrent layers constitute an encoder. The next step is to train the encoder to learn the features of the inputted subset of user response data and the subset of product information such that they may be encoded into a user requirement vector and a product vector, respectively, existing within an encoded feature space 1103. As a next step, product vectors are extracted from the feature space and fed into a decoder to determine one or more beauty products to recommend 1104.”); and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content (i.e. 
AI model includes text encoder trained on images labeled with description of features in order to identify images with labeled text and an image encoder identifies characteristics such as skin condition or salient visual features within images that correspond to beauty product such as products that treat skin condition, wherein the AI model combines the text encoder that labels the images with text and image encoder that identifies images with certain features or skin conditions in order to create/recommend beauty product that treats skin condition) (Lindgren: ¶ [0054] “According to an embodiment, image segmentation network 204 may comprise a convolutional neural network (CNN) with a trained encoder and decoder. There are a variety of CNN architectures known in the art that may be used for image segmentation such as, for example, U-Net, Fast Fully-connected network (FastFCN), Gated-Stream CNN, DeepLab, and Mask R-CNN to name a few. An encoder may be trained to extract features corresponding to a pre-determined set of skin concern objects. For example, skin concern objects may include, but are not limited to, acne, hyperpigmentation, scarring, ultraviolet (UV) damage, melasma, itchy scalp, etc. The encoder can be trained to extract features and detect one or more skin concern objects that may be present in the processed image in order to classify user skin concerns such that the system 100 may make personalized recommendations. For example, image segmentation network 204 could detect whether a user has hyperpigmentation or melasma, which are different skin conditions and thus require different products to treat the conditions and which would be taken into account when analyzing the image and making product recommendations. The encoder may be trained on a subset of the uploaded images to system 100. Additionally, the encoder may be trained using supervised learning and a large corpus of labeled images which show a skin condition. 
Such labeled images may be gathered from publicly available databases, datasets, and medical literature. The encoder may extract features from the image through one or more filters. The decoder is responsible for generating the final output which can be a segmentation mask containing the outline of the skin concern object(s). The outlined skin concern objects may then be used to classify the user skin concerns. The classified skin concerns 220 may then be used as an input to data analysis and recommendation engine 160 for generating personalized beauty product recommendations.” Furthermore, as cited in ¶¶ [0058] [0059] “FIG. 10 is a block diagram illustrating an exemplary architecture for the training of a beauty product recurrent neural network 1000, according to an embodiment. According to an embodiment, a recurrent neural network may be trained and utilized to generate personalized beauty product recommendations. The training of the beauty product RNN 1000 may be implemented using supervised learning techniques. The beauty product RNN 1000 may be trained using a subset of the obtained and pre-processed user responses 1001 and product information 1002. User responses 1001 may include fact-based inputs, concern-based inputs, preference-based inputs, and goal-based inputs as well as locational data extracted from the fact-based inputs. The subset of user responses 1001 may be fed into an encoder 1010 which may have one or more recurrent layers 1011 for embedding the input subset of user responses 1001 such that after the input data has been processed through the one or more recurrent layers the encoder 1010 may extract the input data's defining features and assign weights 1013 to the neurons existing within the hidden recurrent layers 1011. After passing the subset of user responses 1001 through encoder 1010, what is output is a user requirement vector 1021 existing within the encoded feature space 1020. 
The user requirement vector 1021 may comprise all of a given user's responses 1001 and environmental context extracted from locational data encoded into the requirement vector 1021 after passing through the one or more recurrent layers 1011. According to some embodiments, beauty product RNN 1000 may be configured with one or more attention 1012 mechanisms. Attention 1012 is a mechanism that may be combined with the beauty product RNN 1000 allowing it to focus on certain parts of the input when predicting a certain part of the output sequence, enabling easier learning and of higher quality…The subset of product information 1002 may be fed into an encoder 1010 which may have one or more recurrent layers 1015 for embedding the input subset of product information 1002 such that after the input data has been processed through the one or more recurrent layers the encoder 1010 may extract the input data's defining features and assign weights 1014 to the neurons existing within the hidden recurrent layers 1015. After passing the subset of product information 1001 through encoder 1010, what is output is a product vector 1022 existing within the encoded feature space 1020. According to some embodiments, beauty product RNN 1000 may be configured with one or more attention 1012 mechanisms. Attention 1012 is a mechanism that may be combined with the beauty product RNN 1000 allowing it to focus on certain parts of the input when predicting a certain part of the output sequence, enabling easier learning and of higher quality.”). 
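As an illustrative aside, the dual-encoder scheme Lindgren describes in ¶¶ [0058]-[0059] — user responses and product information each folded through recurrent layers into a vector in a shared encoded feature space — can be sketched minimally as follows. The weights, sizes, and random inputs are toy assumptions, and the attention mechanism is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID = 8, 4  # toy input-feature / hidden sizes (assumptions)

# Shared weights so both inputs land in the same encoded feature space
W_x = rng.normal(scale=0.1, size=(D_HID, D_IN))
W_h = rng.normal(scale=0.1, size=(D_HID, D_HID))

def encode(sequence):
    """Minimal recurrent encoder: fold a sequence of feature
    vectors into one fixed-size vector (the final hidden state)."""
    h = np.zeros(D_HID)
    for x in sequence:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

# Toy stand-ins for pre-processed user responses / product information
user_responses = rng.normal(size=(5, D_IN))  # 5 answer vectors
product_info = rng.normal(size=(3, D_IN))    # 3 attribute vectors

user_requirement_vector = encode(user_responses)
product_vector = encode(product_info)
```

Both outputs are same-dimension vectors in one space, which is what makes the downstream similarity comparison between requirement and product vectors possible.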
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Lindgren’s teachings wherein the generative AI model is trained on materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model is trained on encoded materials promoting products and characteristics of the products to learn a relationship between the materials and the characteristics; and wherein the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order “for intelligent context-based personalized beauty product recommendation and matching which takes advantage of an individual's beauty-related needs, concerns, goals, and environment in order to more efficiently match the individual to beauty-related products.” (Lindgren: ¶ [0007]). 
Lotti and Lindgren do not explicitly disclose obtaining, by one or more processors, social media content describing one or more trending looks having a popularity metric above a popularity threshold; and wherein the beauty content is generated in real-time as the one or more trending looks are identified. However, James further discloses: obtaining, by one or more processors, social media content describing one or more trending looks having a popularity metric above a popularity threshold (i.e. obtaining social media content of trending beauty looks having a popularity metric over a threshold) (James: ¶¶ [0173] [0174] “In the workflow performed by the cloud platform, social media network personal accounts (influencers, most trending looks) may be scraped for data related to lipstick colors. The cloud platform may perform analyzing of one or more collected images to extract average make up color (lip, foundation, hair color) by using a deep learning algorithm to segment lip finishes of make up. For instance, the cloud platform may accomplish this by first detecting lips in a plurality of images using a known technique in the art ( such as that described in U.S. Pat. No. 5,805,745, which is incorporated herein by reference). The cloud platform may then perform comparisons of an extracted color with colors most liked by one or more communities of users while also taking into account the setup inputs of the user received from the user's smartphone device. Taking into account all of the collected data, the final step is for the cloud platform to send to the user the results of the analysis in the form of the above-noted selection of relevant looks…In the improvement process performed by the cloud platform and the smartphone app, the user can save her favorites looks and "like" the popular color to enrich the scraping algorithms for a relevant recommendation at a later time. 
The cloud platform can further aggregate all of the users' feedback, and the platform can send to new users the most trending area per localization.” Furthermore, as cited in ¶ [0272] “For instance, to enable users to create a very popular color that emerges as a result of the above-described games or challenges, the grouping may be to group the specific cartridges necessary to make the popular color into a purchasable package. In FIG. 42, in step 4202 data is collected on the top X most popular blended colors determined in the games or challenges. X may be an integer that is 1 or greater.”); and wherein the beauty content is generated in real-time as the one or more trending looks are identified (i.e. beauty product is generated in real-time as trending looks are identified in real-time) (James: ¶ [0069] “The cosmetics offerings-for foundation and liquid lipstick-will have the capability to incorporate real-time trend information as well as color-matching technology into its personalized product offerings as described below.” Furthermore, as cited in ¶ [0170] “Another mode may allow the user to match a lipstick color to their "look" based on selfie picture. In this example, the shade and finish selection on proposed picture is extracted. The user can virtually try on the lipstick in real time, the user can adjust the color presented. When the user is satisfied with the color, the user can touch a button displayed on the app to dispense the formula and an internal neural network will decompose the color requested into different color cartridge dose. After the recipe is sent to the dispenser and the lipstick shade is dispensed, the user can apply the lipstick.”). 
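The popularity-threshold step James is cited for — surfacing only those scraped looks whose engagement clears a cutoff — can be sketched as follows; the field names, weighting, and threshold are illustrative assumptions, not drawn from James:

```python
# Hypothetical scraped post records; field names are assumptions.
posts = [
    {"look": "glass skin", "likes": 120_000, "shares": 9_000},
    {"look": "latte makeup", "likes": 45_000, "shares": 1_200},
    {"look": "graphic liner", "likes": 3_000, "shares": 150},
]

POPULARITY_THRESHOLD = 50_000  # assumed cutoff

def popularity(post):
    # One possible popularity metric: weighted sum of engagement signals
    return post["likes"] + 10 * post["shares"]

# Keep only looks whose popularity metric exceeds the threshold
trending = [p["look"] for p in posts if popularity(p) > POPULARITY_THRESHOLD]
print(trending)  # → ['glass skin', 'latte makeup']
```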
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add James’ teachings of obtaining social media content describing one or more trending looks having a popularity metric above a popularity threshold, and of generating the beauty content in real-time as the one or more trending looks are identified, to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order to “assess users' individual skin and local environmental data to create and deliver personalized, on-the-spot skincare and cosmetic formulas that optimize for increasing levels of personalization over time.” (James: ¶ [0060]). Lotti, Lindgren, and James do not explicitly disclose applying, by the one or more processors, the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model to generate beauty content associated with the at least one beauty product including an image of a person having visual features resembling the one or more trending looks and a reference to the at least one beauty product which can be used in creating the one or more trending looks. However, Kwak further discloses applying, by the one or more processors, the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model to generate beauty content associated with the at least one beauty product including an image of a person having visual features resembling the one or more trending looks and a reference to the at least one beauty product which can be used in creating the one or more trending looks (i.e. 
applying beauty product or facial makeup style corresponding to beauty influencer to an AI model to generate beauty product recommendations and an image of the person with the facial makeup style applied to the image) (Kwak: ¶¶ [0048] [0049] “In particular, image synthesis technology based on style transfer, which can synthesize the user image and the source image, can be used in virtual style synthesis processing. Here, the style transfer technology processed in the present invention may use the currently known BeautyGAN (instance-level facial makeup transfer with deep generative adversarial network) technology or PairedCycleGAN (asymmetric style transfer for applying and removing makeup) technology as is, or according to an embodiment of the present invention, a complex GAN learning model that combines the BeautyGAN and PairedCycleGAN technologies may be used…In the complex GAN learning model according to an embodiment of the present invention, the learning process itself corresponding to makeup style transfer may use the process of the PariedCycleGAN model and only a loss function part may be replaced with a function defined in BeautyGAN, thereby improving the overall synthesis performance and subjective image quality effect. This can improve the performance in transferring the style of the makeup image to a non-makeup image, and in particular, by the face warping process according to the embodiment of the present invention, the naturalness in the transfer based on the warped image can be improved.” Furthermore, as cited in ¶¶ [0058] [0059] “As described above, the image information corresponding to the source image may be acquired from various paths, for example, from makeup video information of beauty influencers with a certain number of subscribers or more. 
The image information corresponding to the source image may include video and image information crawled and extracted from various video upload sites such as Instagram, YouTube, Facebook, Tumblr, Twitch, Naver, and Kakao…For example, to collect video information, the source image collection unit 501 may crawl and capture a source person image (post-makeup image in an Internet beauty video showing before and after makeup) and a source person pre-makeup image, respectively. The style database 600 may be constructed through neural network learning using images subjected to face warping on the captured images by the face warping processing unit 500.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Kwak’s teaching of applying, by the one or more processors, the at least one beauty product and the one or more characteristics of the one or more trending looks to a generative artificial intelligence (AI) model to generate beauty content associated with the at least one beauty product including an image of a person having visual features resembling the one or more trending looks and a reference to the at least one beauty product which can be used in creating the one or more trending looks to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so because “when a user inputs only his/her face image, synthesis images can be recommended according to image transfer with an appropriate source image based on virtual style synthesis and face analysis in which face warping is performed and transfer intensity is adjusted, and related makeup style information can be provided, thereby allowing a user to conveniently select makeup styles.” (Kwak: ¶ [0016]). With respect to Claim 8: All limitations as recited have been analyzed and rejected to claim 1. 
Claim 8 recites “A computing device for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends, the computing device comprising: one or more processors; and a non-transitory computer-readable medium storing instructions thereon that, when executed by the one or more processors, cause the computing device to:” (Lotti: Col. 38 Lines 53-66) perform the steps of method claim 1. Claim 8 does not teach or define any new limitations beyond claim 1. Therefore it is rejected under the same rationale. With respect to Claim 3: Lotti teaches: The method of claim 1, wherein the beauty content promotes the one or more characteristics described in the [[social]] media content (i.e. beauty product promotes characteristic or facial feature described in the media content) (Lotti: Col. 31 Lines 1-16 “At block 297, processing logic obtains, from the generative machine learning model, an output identifying a subset of a plurality of beauty products related to the facial feature. In some embodiments, the subset of beauty products includes one or more beauty products identified by the generative machine learning model in the beauty products database. The one or more beauty products may be related to the facial feature described by the textual identifier. At block 298, processing logic provides an indication of at least one beauty product of the subset of the plurality of beauty products for display at a GUI. In some embodiments, the subset of the plurality of beauty products is filtered, based on one or more criteria, to obtain a sub-subset of beauty products. In some embodiments, an indication of the sub-subset of beauty products is provided for display at the GUI.”). Lotti and Lindgren do not explicitly disclose wherein the beauty content promotes the one or more characteristics described in the social media content. 
However, James further discloses wherein the beauty content promotes the one or more characteristics described in the social media content (i.e. product vectors comprising materials of beauty products and characteristics of beauty products such as ingredients and functions of ingredients are used to train the machine learning model) (James: ¶ [0070] “Using the lipstick system, consumers will be able to create liquid lipstick based on their personal skintone and preferences. The system can shade-match a user's clothing or accessories, or they can even opt to create a particular color that is trending on social media. The device will have three cartridges; collectively, these cartridges will have the capability to create hundreds of shades.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add James’ teaching wherein the beauty content promotes the one or more characteristics described in the social media content to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order to “assess users' individual skin and local environmental data to create and deliver personalized, on-the-spot skincare and cosmetic formulas that optimize for increasing levels of personalization over time.” (James: ¶ [0060]). With respect to Claim 10: All limitations as recited have been analyzed and rejected to claim 3. Claim 10 does not teach or define any new limitations beyond claim 3. Therefore it is rejected under the same rationale. With respect to Claim 4: Lotti teaches: The method of claim 1, wherein the beauty content depicts a look matching one of the trending looks in the [[social]] media content and indicates that the at least one beauty product is used to create the look (i.e. 
beauty product/content matches the look for the facial feature, wherein the beauty product is used to create the desired or trending look corresponding to the facial feature) (Lotti: Cols. 30-31 Lines 61-16 “The generative machine learning model may be trained to output indications of one or more beauty products related to the facial feature described by the textual identifier. In some embodiments, the generative machine learning model may output an identification of a beauty product related to the facial feature described by the textual identifier based on the input prompt. At block 297, processing logic obtains, from the generative machine learning model, an output identifying a subset of a plurality of beauty products related to the facial feature. In some embodiments, the subset of beauty products includes one or more beauty products identified by the generative machine learning model in the beauty products database. The one or more beauty products may be related to the facial feature described by the textual identifier. At block 298, processing logic provides an indication of at least one beauty product of the subset of the plurality of beauty products for display at a GUI. In some embodiments, the subset of the plurality of beauty products is filtered, based on one or more criteria, to obtain a sub-subset of beauty products. In some embodiments, an indication of the sub-subset of beauty products is provided for display at the GUI.” Furthermore, as cited in Col. 19 Lines 45-58 “In some embodiments, the user may provide user preferences 222 via the client device 110. In some embodiments, the user preferences 222 can be stored at a data store and associated with the user for later retrieval and use. In some embodiments, user preferences 222 may include substantially non-detectable attributes of the user ( e.g., non-detectable based on the 2D image 220). 
In some embodiments, user preferences 222 may include characteristics such as the user's preferred or desired style, desired beauty products color, etc. For example, a user preference can include a bold style rather than a natural look style as pertaining to applied beauty products. In some embodiments, user preferences 222 includes other user attributes such as age and/or lifestyle, etc.”). Lotti and Lindgren do not explicitly disclose wherein the beauty content depicts a look matching one of the trending looks in the social media content and indicates that the at least one beauty product is used to create the look. However, James further discloses wherein the beauty content depicts a look matching one of the trending looks in the social media content and indicates that the at least one beauty product is used to create the look (i.e. beauty content depicts a look matching a popular look in the social media content and indicates the beauty product color used to make look) (James: ¶ [0161] “FIG. 17 shows the above-described ecosystem (1700) that is built on proposing a trending lipstick color to the consumer after having analyzed trends on social media by combining favorite colors taste, geolocation, favorite influencers, past selection and likes. It gives the opportunity to the consumer to pick a color based on a look, virtually try it and adjust it if necessary to finally produce the formula on the spot with a connected dispenser. It is also possible to propose a color based on the user's outfit digitalized with a selfie picture. The consumer can save the most favorite colors and share it with his virtual community.”). 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add James’ teaching wherein the beauty content depicts a look matching one of the trending looks in the social media content and indicates that the at least one beauty product is used to create the look to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order to “assess users' individual skin and local environmental data to create and deliver personalized, on-the-spot skincare and cosmetic formulas that optimize for increasing levels of personalization over time.” (James: ¶ [0060]). With respect to Claim 11: All limitations as recited have been analyzed and rejected to claim 4. Claim 11 does not teach or define any new limitations beyond claim 4. Therefore it is rejected under the same rationale. With respect to Claim 5: Lotti does not explicitly disclose the method of claim 1, wherein obtaining social media content describing one or more trending looks includes: obtaining, by the one or more processors, a plurality of posts, images, or videos on social media platforms each describing a look; and identifying, by the one or more processors, one or more looks described in the plurality of posts, images, or videos having the popularity metric above the popularity threshold. However, Lindgren further discloses: obtaining, by the one or more processors, a plurality of posts, images, or videos on social media platforms each describing a look (i.e. 
obtaining posts/images on social media platforms describing a look such as user-uploaded image of themself) (Lindgren: ¶ [0041] “System user information may include, but is not limited to, age, gender, physical address and/or other user location description (e.g., a zip code or geographical region, billing information, etc.), email address, social media handle or username, purchase history, shopping cart inventory data, webpage views, online interactions, user product or service reviews, user recommendation system reviews, social media data ( e.g., likes, dislikes, mentions, product and/or company subscriptions, etc.), fact-based input data (e.g., hair type, hair porosity, hair texture, skin type, dark spots, acne, hyperpigmentation, allergies, beauty routine, beauty product preference(s), etc.), a user-uploaded photograph, concern-based input data ( e.g., fine lines and wrinkles, loss of skin elasticity, thinning hair, damaged hair, sun damage to skin, etc.), preference-based input, and goal-based input data (e.g., radiant and youthful, thermal protection hair, volumize hair, etc.).”); and identifying, by the one or more processors, one or more looks described in the plurality of posts, images, or videos having the popularity metric above the popularity threshold (i.e. identifying top beauty products above having a similarity score above a threshold) (Lindgren: ¶ [0045] “All available products may be represented by a vector and a similarity score of products may be calculated between all product vectors and the requirement vector using similarity calculator 164. The similarity score may be calculated by a variety of methods such as cosine similarity and/or Euclidian distance and/or other similarity metrics known to those skilled in the art. The three to four top products with the highest similarity score in different product categories can be presented to the customer on the end user device 170 as personalized recommendations.”). 
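The similarity ranking Lindgren's ¶ [0045] describes — cosine similarity computed between the requirement vector and every product vector, with the top-scoring products surfaced — can be sketched with toy vectors (the product names and numbers below are assumptions for illustration):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

requirement = np.array([1.0, 0.0, 1.0])  # toy user requirement vector
products = {                             # toy product vectors
    "cleanser": np.array([0.9, 0.1, 0.8]),
    "toner": np.array([0.0, 1.0, 0.0]),
    "moisturizer": np.array([0.7, 0.2, 0.9]),
    "serum": np.array([0.1, 0.9, 0.2]),
}

# Rank all products by similarity to the requirement vector, take the top 3
ranked = sorted(products, key=lambda n: cosine(requirement, products[n]),
                reverse=True)
top3 = ranked[:3]
print(top3)  # → ['cleanser', 'moisturizer', 'serum']
```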
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Lindgren’s teachings of obtaining, by the one or more processors, a plurality of posts, images, or videos on social media platforms each describing a look; and identifying, by the one or more processors, one or more looks described in the plurality of posts, images, or videos having the popularity metric above the popularity threshold to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order “for intelligent context-based personalized beauty product recommendation and matching which takes advantage of an individual's beauty-related needs, concerns, goals, and environment in order to more efficiently match the individual to beauty-related products.” (Lindgren: ¶ [0007]). With respect to Claim 12: All limitations as recited have been analyzed and rejected to claim 5. Claim 12 does not teach or define any new limitations beyond claim 5. Therefore it is rejected under the same rationale. With respect to Claim 6: Lotti teaches: The method of claim 1, wherein applying the at least one beauty product to the generative AI model to generate the beauty content includes: applying, by the one or more processors, text describing the at least one beauty product to the text encoder configured to map the text to a subset of the materials promoting products which were used to train the generative AI model (i.e. prompt includes textual identifier comprising beauty products, wherein the textual identifier is used to train the machine learning AI by mapping facial features to beauty products) (Lotti: Col. 30 Lines 47-67 “At block 295, processing logic generates a prompt that describes the information related to at least some of the plurality of beauty products and information identifying the textual identifier. 
In some embodiments, the prompt includes textual information associated with the textual identifier and the information relevant to the textual identifier identified in the database (e.g., at block 294). At block 296, processing logic provides, to a generative machine learning model, the prompt including information identifying the textual identifier and contextual information such as relevant information identified from the database. In some embodiments, information associated with the textual identifier is provided to a generative machine learning model (e.g., generative machine learning model 250 of FIG. 2A). The generative machine learning model may be trained to output indications of one or more beauty products related to the facial feature described by the textual identifier. In some embodiments, the generative machine learning model may output an identification of a beauty product related to the facial feature described by the textual identifier based on the input prompt.”); identifying, by the one or more processors via the image encoder, salient visual features of the subset which are related to the text (i.e. identify the facial features or salient visual features via image encoder/analysis) (Lotti: Col. 30 Lines 24-38 “At block 293, processing logic determines (e.g., generates, etc.), based on the 2D image data, a textual identifier that describes a facial feature of the human face. In some embodiments, the textual identifier is determined based at least in part on the 3D model. The textual identifier may include textual information such as information described herein above with respect to FIG. 2A. In some embodiments, processing logic determines a textual identifier by using a generative machine learning model ( e.g., generative machine learning model 234 of FIG. 2A) that generates an output identifying the textual identifier based on the 2D image data. 
In some embodiments, processing logic determines a textual identifier by using a discriminative machine learning model ( e.g., discriminative machine learning model 236 of FIG. 2A).”); and combining, by the one or more processors, the salient visual features of the subset to generate the beauty content (i.e. identifying beauty product/content that matches the facial features or salient visual features) (Lotti: Cols. 30-31 Lines 61-16 “The generative machine learning model may be trained to output indications of one or more beauty products related to the facial feature described by the textual identifier. In some embodiments, the generative machine learning model may output an identification of a beauty product related to the facial feature described by the textual identifier based on the input prompt. At block 297, processing logic obtains, from the generative machine learning model, an output identifying a subset of a plurality of beauty products related to the facial feature. In some embodiments, the subset of beauty products includes one or more beauty products identified by the generative machine learning model in the beauty products database. The one or more beauty products may be related to the facial feature described by the textual identifier. At block 298, processing logic provides an indication of at least one beauty product of the subset of the plurality of beauty products for display at a GUI. In some embodiments, the subset of the plurality of beauty products is filtered, based on one or more criteria, to obtain a sub-subset of beauty products. In some embodiments, an indication of the sub-subset of beauty products is provided for display at the GUI.”). With respect to Claim 13: All limitations as recited have been analyzed and rejected to claim 6. Claim 13 does not teach or define any new limitations beyond claim 6. Therefore it is rejected under the same rationale. 
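The claim-6 mapping analyzed above — a text encoder selecting a subset of labeled images that correspond to a characteristic, with an image encoder identifying salient visual features within that subset — resembles dual-encoder retrieval in a shared embedding space, which can be sketched as follows. The embeddings, labels, and 0.8 threshold are toy assumptions:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy shared text/image embedding space; all vectors are assumptions.
text_embedding = {"bold red lip": np.array([1.0, 0.2, 0.0])}
image_embeddings = {
    "img_a": np.array([0.9, 0.3, 0.1]),  # labeled "bold red lip"
    "img_b": np.array([0.1, 0.1, 1.0]),  # labeled "smoky eye"
    "img_c": np.array([0.8, 0.1, 0.2]),  # labeled "red lip, matte"
}

# Text encoder side: select the subset of images matching the characteristic
query = text_embedding["bold red lip"]
subset = [k for k, v in image_embeddings.items() if cos(query, v) > 0.8]
# A generative model would then combine salient features of `subset`
print(sorted(subset))  # → ['img_a', 'img_c']
```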
With respect to Claim 7: Lotti teaches: The method of claim 1, wherein the one or more characteristics of the one or more trending looks include at least one of: […], a color, a hairstyle, a beneficial effect, […] a scent, […] (i.e. characteristics of beauty products include color, hairstyle that beauty product provides, beneficial treatments of the beauty product, and perfume or scent of beauty product) (Lotti: Col. 19 Lines 51-56 “In some embodiments, user preferences 222 may include characteristics such as the user's preferred or desired style, desired beauty products color, etc. For example, a user preference can include a bold style rather than a natural look style as pertaining to applied beauty products.” Furthermore, as cited in Col. 7 Lines 6-31 “A beauty product can refer to any substance or item designed for use on the body, particularly the face, skin, hair, and nails, often with the purpose of enhancing and/or maintaining beauty and appearance. Beauty products can often be part of personal care and grooming routines, and can serve various functions, such as cleansing, moisturizing, styling, and embellishing. Beauty products include, but are not limited to, skincare products such as cleansers, moisturizers, serums, toners, or other products designed to care for the skin and/or address specific skin concerns. Beauty products can include haircare product, such as shampoos, conditioners, hair masks, styling products (e.g., hair wax, hair spray, etc.), and treatments often designed to clean, nourish, and/or style the hair (e.g., hair cutting and styling, etc.). Beauty products can include cosmetics, such as foundation, lipstick, eyeshadow, mascara, eyeliner, bronzer, or other items often applied to enhance facial features and/or create different "looks." Beauty products can include nail care products, such as nail polish, nail polish remover and/or other products that can help maintain healthy and/or attractive nails. 
Beauty products can include fragrance products such as perfumes and colognes designed to add or enhance the scent of the body or user. Beauty products can include personal care products such as deodorants, body lotions, shower gels, or other products designed to maintain personal hygiene.”). Lotti does not explicitly disclose the method of claim 1, wherein the one or more characteristics of the one or more beauty products or looks include at least one of: an ingredient, a color, a hairstyle, a beneficial effect of one of the beauty products, a scent, a chemical property, a chemical composition, or an acidity level. However, Lindgren further discloses wherein the one or more characteristics of the one or more beauty products or looks include at least one of: an ingredient, […], a chemical property, a chemical composition, or an acidity level (i.e. characteristics of beauty products include ingredients, chemical properties and composition, and acidity level including salicylic acid concentration) (Lindgren: Fig. 4 and ¶ [0044] “Product database(s) 150 may include detailed product information, such as the non-limiting examples of ingredients, functions of the ingredients, allergens, sourcing information (e.g., fair trade, ethically sourced, etc.), benefits, use, ingredient free data and the like. 
According to an embodiment, product information may be vectorized such that each product in the product database(s) 150 has a unique vector representation.” Furthermore, as cited in ¶ [0045] “For example, a customer may have indicated that he is concerned about clogged pores and prefers organic products, and the recommendation engine 160 can differentiate between products that have chemical ingredients versus organic ingredients and would recommend to the customer a product with organic ingredients that can alleviate clogged pores.” Furthermore, as cited in ¶ [0048] “For example, in the event the individual profile 120 indicates the presence of oily skin, acne, scarring, lives in a tropical climate and prefers gluten free products, the recommendation engine 160 via the neural network 163 match the individual with appropriate cleansers with salicylic acid to address acne, toner with aloe for calming and soothing properties, lightweight moisturizer with sunscreen to reflect the tropical climate and includes salicylic acid for acne, with all product recommendations being gluten free.”). Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Lindgren’s one or more characteristics of the one or more beauty products or looks include at least one of: an ingredient, a color, a hairstyle, a beneficial effect of one of the beauty products, a scent, a chemical property, a chemical composition, or an acidity level to Lotti’s method for implementing generative artificial intelligence to provide beauty content in accordance with beauty trends. One of ordinary skill in the art would have been motivated to do so in order “for intelligent context-based personalized beauty product recommendation and matching which takes advantage of an individual's beauty-related needs, concerns, goals, and environment in order to more efficiently match the individual to beauty-related products.” (Lindgren: ¶ [0007]).
With respect to Claim 14: All limitations as recited have been analyzed and rejected as applied to claim 7. Claim 14 does not teach or define any new limitations beyond claim 7. Therefore, it is rejected under the same rationale.

With respect to Claim 21: Lotti teaches: The method of claim 1, wherein the generative AI model utilizes a discriminator to compare the beauty content to machine-generated beauty content and human-generated beauty content, and determine that the beauty content is satisfactory in response to determining that the beauty content shares more similarities with the human-generated beauty content than the machine-generated beauty content (i.e., utilizes processing logic to compare machine-generated beauty content and beauty content via text identifiers or human-generated content to determine that the similarities satisfy a threshold level of confidence) (Lotti: Col. 5 Lines 32-43 “In some embodiments, the generative machine learning model (e.g., the generative machine learning model used to identify a subset of beauty products) may be trained with a training set that includes training input that identifies multiple groups of textual identifiers where each group describes one or more facial features of a human face (and/or relationships thereof). The training output of the generative machine learning model can be compared to target output including a subset of the beauty products that corresponds to a respective group of the textual identifiers. The parameters (e.g., values thereof) of the generative machine learning model can be adjusted based on the comparison.” Furthermore, as cited in Col. 41 Lines 36-55 “At operation 803, processing logic determines whether the level of confidence that the textual identifier corresponds to the landmark on the 3D model satisfies a threshold level of confidence.
If the level of confidence that the textual identifier corresponds to the landmark on the 3D model does not satisfy the threshold level of confidence, processing logic returns to operation 801. If the level of confidence that the textual identifier corresponds to the landmark on the 3D model does satisfy the threshold level of confidence, processing logic proceeds to operation 804. In some embodiments, processing logic determines whether the level of confidence that textual identifier corresponding to landmarks on the 3D model satisfies a threshold level of confidence. If the level of confidence that the textual identifier corresponding to the landmarks on the 3D model does not satisfy the threshold level of confidence, processing logic returns to operation 801. If the level of confidence that the textual identifier corresponding to the landmarks on the 3D model does satisfy the threshold level of confidence, processing logic proceeds to operation 804.”).

Response to Arguments

Applicant’s arguments (see pages 7-9 of the Remarks filed on 02/06/2026) with respect to the 35 U.S.C. § 101 rejection(s) of claim(s) 1-14 and 21 have been considered but are not persuasive. The Applicant asserts “Even assuming arguendo that the claims recite a judicial exception, the claims are not directed to an abstract idea under Prong Two, because the claims as a whole integrate the judicial exception into a practical application of the exception by improving a technical field. More specifically, the claims improve content generation by "utiliz[ing] a structured approach to content generation that leverages pre-trained models and encoded materials. This approach allows for the efficient storage and retrieval of data, as well as the dynamic generation of content without the need for storing large volumes of pre-generated content." Applicant's specification at par. 21.
Additionally, "by leveraging a GAN that includes a text encoder and an image encoder, the system efficiently associates text with images or videos, optimizing the way content is processed and generated" Id. at par. 20. Applicant's specification at par. 22 further explains, "The generative AI model, including a text encoder and an image encoder, plays a pivotal role in associating text with images or videos and identifying salient visual features that correspond to the text descriptions of trending beauty products. This model facilitates the creation of beauty content that not only promotes specific characteristics of beauty products but also depicts looks that match those found in the media content, indicating how the products can be used to achieve these looks." As in Ex parte Desjardins, the improvements are reflected in the claims. More specifically, claim 1 is amended to recite, in part, "the generative AI model includes a text encoder trained on a set of images labeled with text and an image encoder trained on salient visual features in the set of images labeled with descriptions of the salient visual features, such that when the one or more characteristics of the one or more trending looks is applied to the text encoder, the text encoder identifies a subset of images labeled with text that corresponds to the one or more characteristics, the image encoder identifies salient visual features within the subset of images that correspond to the one or more characteristics, and the generative AI model combines the salient visual features within the subset to create the beauty content." Thus, the claims improve content generation by using a text encoder and an image encoder to optimize the way content is processed and generated. Therefore, the claims integrate the alleged abstract idea into a practical application by improving a technical field. For at least these reasons, the claims are directed to statutory subject matter.
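The dual-encoder flow recited in amended claim 1 (text encoder selects a subset of labeled images matching a characteristic, image encoder extracts salient visual features from that subset, and the model combines them) can be illustrated with dictionary lookups in place of trained encoders. All names, labels, and features below are hypothetical examples, not the application's actual model or data:

```python
# Images labeled with text (the text encoder's training-style data).
IMAGE_LABELS = {
    "img1": {"bold lips", "red"},
    "img2": {"natural look"},
    "img3": {"bold lips", "glossy"},
}
# Per-image salient visual features (the image encoder's side).
SALIENT_FEATURES = {
    "img1": ["full lip contour"],
    "img2": ["soft blush"],
    "img3": ["high-gloss finish"],
}

def text_encoder(characteristic):
    """'Text encoder': identify the subset of images whose labels match."""
    return [img for img, labels in IMAGE_LABELS.items() if characteristic in labels]

def image_encoder(images):
    """'Image encoder': collect salient visual features within the subset."""
    return [f for img in images for f in SALIENT_FEATURES[img]]

def generate_beauty_content(characteristic):
    subset = text_encoder(characteristic)
    features = image_encoder(subset)
    return " + ".join(features)  # 'combine' the salient features into content

print(generate_beauty_content("bold lips"))  # full lip contour + high-gloss finish
```

The examiner's "recited at a high level" point is visible in the sketch: nothing in the claim language constrains how the encoders match or combine, so even a lookup table satisfies the recited steps.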
Therefore, Applicant respectfully requests the rejection of claims 1, 3-8, 10-14 and 21 under 35 U.S.C. § 101 be withdrawn.” The Examiner respectfully disagrees. The Examiner notes that the text and image encoders used to train the AI model are recited at a high level in the claims. Furthermore, ¶ [0010] of U.S. Publication 2008/0050047 to Bashyam discloses that it is conventional or well-known to apply image/text encoding schemes to an adaptive prediction model: “Huffman encoding is one well known species among the various encoding techniques that may be used during compression. For example, the industry standard JPEG image compression algorithm employs Huffman encoding on DCT coefficients (Discrete Cosine Transform factors) extracted from a to-be-compressed input image (typically, a YCrCb coded image). While Huffman encoding may perform well in some instances, the encoding technique of choice for variable length and/or limited length entropy encoding is known as arithmetic encoding. Arithmetic encoding (ARI for short) relies on the maintaining of a running history of recently received un-compressed values (alphabet characters or symbols) and on the maintaining of a fixed or variable prediction model that indicates with fairly good accuracy what next un-compressed value (character or symbol) is most likely to appear in a sampled stream of input data given an input history of finite length. A seminal description of arithmetic encoding may be found in U.S. Pat. No. 4,122,440 issued Oct. 24, 1978 to Langdon, Jr., et al. A more modern example may be found in U.S. Pat. No. 6,990,242 issued Jan. 24, 2006 to Malvar. The latter provides a background explanation regarding a conventional arithmetic encoding scheme and how it may be coupled with an adaptive predicting model.” Therefore, the rejection(s) of claim(s) 1, 3-8, 10-14, and 21 under 35 U.S.C. § 101 is maintained above with an updated analysis.
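The Bashyam passage quoted above names Huffman encoding as a well-known entropy-coding technique. For readers unfamiliar with it, here is a compact, self-contained construction of Huffman codes over symbol frequencies (an illustration of the textbook algorithm, not code from any cited reference):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free Huffman code from symbol frequencies in text."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two least-frequent subtrees, prefixing 0 / 1.
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[s] for s in "abracadabra")
print(len(encoded), "bits vs", 8 * len("abracadabra"), "uncompressed")  # 23 bits vs 88
```

Frequent symbols get shorter codes (here 'a', occurring five times, gets a 1-bit code), which is the property the JPEG example in the quote relies on.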
Applicant’s arguments (see pages 9-11 of the Remarks filed on 02/06/2026) with respect to the 35 U.S.C. § 103 rejection(s) of claim(s) 1-14 and 21 over Lotti in view of Lindgren and James have been considered but are moot because the arguments do not apply to the new ground(s) of rejection made in further view of U.S. Publication 2024/0346774 to Kwak.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following reference is cited to further show the state of the art: U.S. Publication 2020/0315322 to Bowyer, disclosing systems and techniques for a beauty creation platform. A beauty product configuration user interface may be generated. The beauty product configuration user interface includes a first display area and a second display area. A set of selectable product user interface elements may be displayed in the second display area. A selection may be received of a selectable product user interface element of the set of selectable product user interface elements. In response to receipt of the selection, a visual representation of a configurable product represented by the selected product user interface element may be displayed in the first display area.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Azam Ansari, whose telephone number is (571) 272-7047. The examiner can normally be reached Monday to Friday between 8 AM and 4:30 PM. If any attempt to reach the examiner by telephone is unsuccessful, the examiner's supervisor, Waseem Ashraf, can be reached at (571) 270-3948.

Another resource available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Applicants are invited to contact the Office to schedule either an in-person or a telephonic interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner.

/AZAM A ANSARI/
Primary Examiner, Art Unit 3621
March 9, 2026

Prosecution Timeline

Jun 12, 2024: Application Filed
Jul 25, 2025: Examiner Interview (Telephonic)
Aug 06, 2025: Non-Final Rejection — §101, §103
Aug 27, 2025: Examiner Interview Summary
Aug 27, 2025: Applicant Interview (Telephonic)
Aug 28, 2025: Response Filed
Oct 16, 2025: Final Rejection — §101, §103
Nov 10, 2025: Request for Continued Examination
Nov 19, 2025: Response after Non-Final Action
Nov 25, 2025: Non-Final Rejection — §101, §103
Dec 02, 2025: Response after Non-Final Action
Feb 05, 2026: Examiner Interview Summary
Feb 05, 2026: Applicant Interview (Telephonic)
Feb 06, 2026: Response Filed
Mar 12, 2026: Final Rejection — §101, §103
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591892
SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EARLY DETECTION OF A MERCHANT DATA BREACH THROUGH MACHINE-LEARNING ANALYSIS
2y 5m to grant Granted Mar 31, 2026
Patent 12499471
AUTOMATICALLY GENERATING A RETAILER-SPECIFIC BRAND PAGE BASED ON A MACHINE LEARNING PREDICTION OF ITEM AVAILABILITY
2y 5m to grant Granted Dec 16, 2025
Patent 12469042
SYSTEM FOR GENERATING A NON-FUNGIBLE TOKEN INCLUDING MUTABLE AND IMMUTABLE ATTRIBUTES AND RELATED METHODS
2y 5m to grant Granted Nov 11, 2025
Patent 12423918
AUGMENTED REALITY IN-APPLICATION ADVERTISEMENTS
2y 5m to grant Granted Sep 23, 2025
Patent 12417468
USER ENGAGEMENT MODELING FOR ENGAGEMENT OPTIMIZATION
2y 5m to grant Granted Sep 16, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 48%
With Interview: 98% (+49.7%)
Median Time to Grant: 3y 8m
PTA Risk: High

Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
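The relationship between the two displayed probabilities and the quoted interview lift is simple subtraction. The sketch below uses the rounded on-page figures, so it yields +50.0% rather than the dashboard's +49.7%, which is presumably computed from unrounded case counts not shown here:

```python
baseline = 0.48    # grant probability without interview (career allow rate)
with_int = 0.98    # grant probability with an examiner interview
lift = with_int - baseline
print(f"Interview lift: {lift:+.1%}")  # Interview lift: +50.0%
```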
