DETAILED ACTION
Status of Claims
This is a final action in reply to the response filed on March 10, 2026.
Claims 1, 3 and 6 have been amended.
Claim 2 has been cancelled.
Claims 1 and 3-7 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action.
The rejection of claims 1 and 3-7 under 35 U.S.C. § 112(b) is withdrawn in light of Applicant’s amendments and arguments.
The rejection of claims 1 and 3-7 under 35 U.S.C. § 101 is maintained. Please see the Response to Applicant’s arguments below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 3-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. In adhering to the 2019 PEG, Step 1 is directed to determining whether or not the claims fall within a statutory class. Herein, claims 1 and 3-7 fall within the statutory class of a machine. Hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 being directed to a statutory category, the 2019 PEG flowchart proceeds to Step 2. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG makes two changes in Step 2A: it sets forth a new procedure for Step 2A (called “revised Step 2A”) under which a claim is not “directed to” a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows: Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. The claim(s) recite(s) the following abstract idea indicated by non-boldface font and additional limitations indicated by boldface font:
Claim 1:
A server system for supplementing training data, the server system configured to instantiate a backend application instance configured to communicably couple to a frontend application instance instantiated by an end-user computing device, the server system comprising:
a database service storing training data, the training data defined in part by entries in a table, each entry comprising:
a set of properties comprising:
demographic data of a prior user of the server system;
aromatic note preferences of the prior user; and
aromatic note aversions of the prior user; and
a set of values each corresponding to a respective one fragranced product, each respective value defined by one of:
a purchase by the prior user of the respective one fragranced product;
or a review by the prior user of the respective one fragranced product;
a memory resource storing instructions for instantiating the backend application instance;
a processing resource configured to cooperate with the memory resource to execute the instructions to instantiate the backend application instance, the backend application instance configured to:
instantiate a neural network instance as a trained machine learning model, the trained machine learning model trained from the training data stored by the database service and configured to assign a value to each of a set of fragranced products in response to receiving input data provided by a user of the frontend application instance;
receive first input data from the frontend application instance communicable coupled to the backend application instance, the first input data comprising:
demographic data of the user;
aromatic note preferences of the user; and
aromatic note aversions of the user;
provide the received input data to the trained machine learning model as input;
receive from the trained machine learning model, a set of values each value corresponding to a predicted review of one respective fragranced product by the user;
select a group of fragranced products corresponding to a subset of the set of values satisfying a threshold; and
randomly select a random fragranced product different from the selected group of fragranced products, the selected group of fragranced products and the random fragrance product defining a sampling kit; and
transmit fragranced product selection data to cause the sampling kit to be assembled with the selected group of fragranced products and a random fragranced product and to cause the assembled sampling kit to be provided to the user.
Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within Mental Processes, i.e., concepts performed in the human mind including observations, evaluations, judgments, and opinions, and Certain Methods of Organizing Human Activity, such as commercial interactions, including advertising, marketing or sales activities or behaviors, and business relations. Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The server comprising a database service, memory resource and processing resource, the end-user computing device, and the trained machine learning model/neural network instance are recited at a high level of generality, i.e., as a generic computing and processing system. These elements amount to no more than mere instructions to apply the exception using generic computing devices each comprising at least a processor, memory, and display device. The trained machine learning model/neural network instance is used as a tool, in its ordinary capacity, to carry out the abstract idea. Further, a processor configured to cause receiving/determining/transmitting data is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the 2019 PEG flowchart proceeds to Step 2B. Therein, the additional elements and combinations therewith are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of a server comprising a database service, memory resource and processing resource, an end-user computing device, and a trained machine learning model/neural network instance. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, executing all the steps/functions by a user/service subsystem is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant’s specification supports the claims being directed to use of a generic server comprising a database service, memory resource and processing resource, end-user computing device, and trained machine learning model/neural network instance type structure at paragraphs 0058-0059: “The client device 104 can be any electronic device and can include a processor 104 a, volatile or non-volatile memory 104 b, a display 104 c, and an input sensor or an input device 104 d.” Paragraph 0060: “the host server 102 leverages one or more processor allocations or processing resources—all configurations of which necessarily implicate physical hardware—to load, from a non-transitory memory allocation or resource, an executable asset(s). The processor allocation can cooperate with the memory allocation to instantiate an instance of backend software configured to provide an interface with which corresponding frontend instance of software can communicate.” Paragraph 0040: “the model—which may be a neural network, as one example—”. See also Figure 1.
Taken as an ordered combination, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations correspond to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting or non-exclusive examples:
i. Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));
iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or
iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook.
The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., a process/machine for performing the present claims); and receiving or transmitting data (e.g., the present claims). The dependent claims 3-7 do not cure the above-stated deficiencies, and in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea. Claim 3 further limits the abstract idea in that the backend application instance is configured to receive purchase information indicating the user completed a purchase from the group of fragranced products selected by the machine learning model (a more detailed abstract idea remains an abstract idea). Claim 4 further limits the abstract idea in that the backend application instance is configured to: create, with the database service, a new entry in the training data comprising: the demographic data of the user; the aromatic note preferences of the user; the aromatic note aversions of the user; and a value satisfying the threshold in respect of the purchased fragranced product (a more detailed abstract idea remains an abstract idea).
Claim 5 further limits the abstract idea in that the new entry in the training data comprises: a neutral value corresponding to a set of remaining fragranced products different from the group of fragranced products selected by the machine learning model, the neutral value corresponding to a neutral review score (a more detailed abstract idea remains an abstract idea). Claim 6 further limits the abstract idea in that the new entry in the training data comprises: a negative-sentiment value corresponding to a set of remaining fragranced products different from the group of fragranced products selected for purchase by the user, the negative-sentiment value corresponding to a negative review score (a more detailed abstract idea remains an abstract idea). And claim 7 further limits the abstract idea in that the backend application instance is configured to retrain the trained machine learning model from the updated training data (a more detailed abstract idea remains an abstract idea). The identified recitation of the dependent claims falls within Mental Processes, i.e., concepts performed in the human mind including observations, evaluations, judgments, and opinions, and Certain Methods of Organizing Human Activity, such as commercial interactions, including advertising, marketing or sales activities or behaviors, and business relations. Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101. Thus, viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments
Applicant's arguments filed on 3/10/2026 have been fully considered but they are not persuasive.
With regard to the 35 U.S.C. 101 rejection, Applicant argues that (1) “Claim 1 does not recite a mental process”; (2) “Claim 1 does not recite a method of organizing human activity”; (3) “Step 2A Prong Two: Claim 1 is clearly integrated into a practical application” and (4) “Step 2B: Claim 1 provides an inventive concept” (Remarks, pages 6-11).
With regard to the 35 U.S.C. 103 rejections, Applicant argues with respect to the prior art of record that (5) “Neither Shaya nor Lee disclose such a system, nor does a hypothetical modification of Shaya in view of the teachings of Lee result in the claimed system.” and (6) “It is entirely unclear how the teachings of Lee, requiring physical contact with a user, can or should be incorporated into the internet-based system of Shaya to improve the disclosed product-selection system of Shaya.” (Remarks, pages 11-13).
In response to Applicant’s arguments (1) and (2), Examiner respectfully disagrees. Claim 1 recites a system for supplementing training data. A database stores the training data, including a set of properties (demographic data of a prior user, aromatic note preferences and aversions of the prior user) and a set of values each corresponding to a respective one fragranced product, each value defined by a purchase by the prior user or a review by the prior user of the respective one fragranced product. A trained machine learning model, i.e., a neural network, assigns a value to each of a set of fragranced products based on input of a user, inputs such as demographic data and aromatic note preferences and aversions. A set of values is received from the trained machine learning model, each value corresponding to a predicted review of one respective fragranced product by the user. A group of fragranced products is selected corresponding to a subset of the set of values satisfying a threshold. A sampling kit is assembled which contains a random fragranced product and the selected group of fragranced products, and it is provided to the user, as described in Applicant’s disclosure in paragraph 0031: “identifying long-term aroma preferences and aversions as described herein can take the form of a server-client instance pair configured for recommending fragranced products to users. In particular, such a system can leverage a predictive model (e.g., a supervised trained model) to generate fragrance recommendations based on user input data.”
Therefore, claim 1 recites an abstract idea falling within the Guidance’s subject-matter groupings of Mental Processes, i.e., concepts performed in the human mind including observations (demographic data of the prior user and the user, aromatic note preferences and aversions of the prior user and the user, purchases or reviews of products by the prior user), evaluation (predicted review values for each respective fragranced product), judgment (selecting a group of fragrances based on a threshold), and opinion (a sampling kit with a random fragranced product and the selected group of fragranced products), and Certain Methods of Organizing Human Activity, such as commercial or legal interactions including advertising, marketing or sales activities or behaviors, and business relations, such as product inputs from the user and sampling kits with fragranced products for review/feedback as new product inputs. In addition, the claim of Example 39 does not recite any of the judicial exceptions enumerated in the 2019 PEG because that claim does not recite any mathematical relationships, formulas, or calculations. While some of the limitations may be based on mathematical concepts, the mathematical concepts are not recited in that claim. Further, that claim does not recite a mental process because the steps are not practically performed in the human mind. Finally, that claim does not recite any method of organizing human activity such as a fundamental economic concept or managing interactions between people. Example 39 describes a method of training a neural network for facial detection, where each training set, i.e., the first and second training sets, is used to train the neural network; the neural network in Example 39 is trained with the first and second training sets. In contrast, claim 1 recites that an input from the user is provided to the trained machine learning model and the trained machine learning model assigns a value to each of a set of fragranced products in response to the user input.
Applicant’s claims do not describe that the machine learning model/neural network instance is re-trained in this manner. Further, model retraining enables the model in production to make the most accurate predictions with the most up-to-date data. Model retraining does not change the parameters and variables used in the model; it adapts the model to the current data so that the existing parameters give more robust and up-to-date outputs. The rejection is maintained.
In response to Applicant’s argument (3), Examiner respectfully disagrees. Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The server comprising a database service, memory resource and processing resource, the end-user computing device, and the trained machine learning model/neural network instance are recited at a high level of generality, i.e., as a generic processor performing the generic computer functions of receiving input data from a prior user and a user/determining values and analysis based on a threshold/transmitting recommendations as a sampling kit of fragrances. This generic processor/memory/server limitation is no more than mere instructions to apply the exception using a generic computer component. Considering the claims as a whole, these additional limitations merely add generic computer activities, i.e., receiving/determining/transmitting. The recited server comprising a database service, memory resource and processing resource, end-user computing device, and trained machine learning model/neural network instance merely links the abstract idea to a computer environment. In this way, the involvement of these elements is merely a field of use which contributes only nominally and insignificantly to the recited system, which indicates an absence of integration. Claim 1 uses the server comprising a database service, memory resource and processing resource, end-user computing device, and trained machine learning model/neural network instance as a tool, in its ordinary capacity, to carry out the abstract idea.
As to this level of computer involvement, mere automation of manual processes using generic computers does not necessarily indicate a patent-eligible improvement in computer technology. Considered as a whole, the claimed system does not improve the functioning of the computer itself or any other technology or technical field. Further, a processor configured to cause receiving/determining/transmitting data to a device is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The rejection is maintained.
In response to Applicant’s argument (4), Examiner respectfully disagrees. The additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of a server comprising a database service, memory resource and processing resource, an end-user computing device, and a trained machine learning model/neural network instance. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, executing all the steps/functions by a user/service subsystem is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant’s specification supports the claims being directed to use of a generic server comprising a database service, memory resource and processing resource, end-user computing device, and trained machine learning model/neural network instance type structure at paragraphs 0058-0059: “The client device 104 can be any electronic device and can include a processor 104 a, volatile or non-volatile memory 104 b, a display 104 c, and an input sensor or an input device 104 d.” Paragraph 0060: “the host server 102 leverages one or more processor allocations or processing resources—all configurations of which necessarily implicate physical hardware—to load, from a non-transitory memory allocation or resource, an executable asset(s). The processor allocation can cooperate with the memory allocation to instantiate an instance of backend software configured to provide an interface with which corresponding frontend instance of software can communicate.” Paragraph 0040: “the model—which may be a neural network, as one example”. See also Figure 1.
Taken as an ordered combination, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations correspond to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting or non-exclusive examples:
i. Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));
iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or
iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook.
The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., a process/machine for performing the present claims); and receiving or transmitting data (e.g., the present claims). In addition, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception. As per MPEP § 2106.05(a) II., Improvements to any other technology or technical field, please see the examples that the courts have indicated may not be sufficient to show an improvement to technology, such as gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48. The rejection is maintained.
In response to Applicant’s argument (5), Examiner respectfully disagrees. Please see the updated rejection below as necessitated by the amendments.
In response to Applicant’s argument (6), Examiner respectfully disagrees. Shaya, as explained below, describes the elements of the claimed system. Shaya is silent as to the inputs being related to aromatic note preferences and aversions of fragranced products. Lee, as explained below, teaches that inputs can be related to aromatic note preferences and aversions of fragranced products and that the inputs/responses (¶ 0109-0110) are stored in a database in order to recommend a fragrance/cologne. Both Shaya and Lee teach product recommendation management. Shaya teaches in the Abstract: “multivariate analysis to predict or recommend optimal products from a predefined population of commercially available products are disclosed.” Lee teaches in the Abstract “facilitating product preferences and/or product recommendations.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Lee would have yielded predictable results and resulted in an improved system.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Shaya et al. (US 7,809,601 B2), hereinafter “Shaya”, in view of Lee et al. (US 2021/0406983 A1), hereinafter “Lee”, Awerbuch et al. (US 2006/0136284 A1), hereinafter “Awerbuch”, and Letensorer et al. (WO 2023/159056 A1), hereinafter “Letensorer”.
Claim 1:
Shaya as shown discloses a server system (Figures 3 and 4) for supplementing training data, the server system configured to instantiate a backend application instance configured to communicably couple to a frontend application instance instantiated by an end-user computing device, the server system comprising:
a database service storing training data, the training data defined in part by entries in a table, each entry comprising (Figures 5 and 6 illustrate a database, and col. 29, lines 54-56: “improve their recommendation quality over time by periodically re-training the product recommendation engine based on consumer feedback”);
a set of properties comprising: demographic data of a prior user of the server system; aromatic note preferences of the prior user; and aromatic note aversions of the prior user; and a set of values each corresponding to a respective one fragranced product, each respective value defined by one of: a purchase by the prior user of the respective one fragranced product; or a review by the prior user of the respective one fragranced product (col. 8, lines 60-65: “collect data on consumer demographics and substrate needs, including consumer preferences for products, the current and historical condition of the substrate to be treated (e.g., consumer's skin), and responses of the substrate to current and historical product uses.” And col. 8, lines 4-14: “include a neural network. The neural network is used to model the relationship(s) (typically non-linear) between the input variables of a served consumer's descriptive variables and the performance and/or preference responses of other consumers to products they have used in combination with the descriptive characterization of those consumers, and output variables of individual product performance and/or preference predictions. The neural network may be trained using actual product performance and preference data of a subset of a relevant population.”);
a memory resource storing instructions for instantiating the backend application instance (Figure 3);
a processing resource configured to cooperate with the memory resource to execute the instructions to instantiate the backend application instance, the backend application instance configured to: instantiate a neural network instance as a trained machine learning model, the trained machine learning model trained from the training data stored by the database service and configured to assign a value to each of a set of fragranced products in response to receiving input data provided by a user of the frontend application instance (col. 8, lines 4-14: “The data processing portion of the invention may include a neural network. The neural network is used to model the relationship(s) (typically non-linear) between the input variables of a served consumer's descriptive variables” (i.e., input data provided by a user) “and the performance and/or preference responses of other consumers to products they have used in combination with the descriptive characterization of those consumers” (i.e., the training data), “and output variables of individual product performance and/or preference predictions.” See also col. 8, lines 46-52: “The invention may periodically re-train its data processing portions to more accurately predict product performances and consumer preferences. When the embodiment of the invention utilizes re-training, as the numbers of consumers and multiple feedback entries accumulate, the invention acquires greater precision based on the real world experiences of those consumers.”);
receive first input data from the frontend application instance communicably coupled to the backend application instance, the first input data comprising: demographic data of the user; aromatic note preferences of the user; and aromatic note aversions of the user (col. 34, lines 59-64: “In step 1710, the invention solicits input from the consumer, which is received at step 1715. As discussed above, input may comprise a wide variety of information including personal profile information, concern areas, severity and importance for each concern area, preferences for products used recently, and the like.” See also col. 13, lines 24-29: “In the initial or early interactions with a new consumer, the invention solicits personal profile information (e.g., age, gender, sleep patterns, medical conditions, prescription drug use, known allergies, geographic location, time spent outdoors, vitamin use, diet, and the like) and target concerns from the consumer.” See also figure 3);
provide the received input data to the trained machine learning model as input (col. 8, lines 4-7: “The data processing portion of the invention may include a neural network. The neural network is used to model the relationship(s) (typically non-linear) between the input variables of a served consumer's descriptive variables” (i.e., the received input data));
receive from the trained machine learning model, a set of values each value corresponding to a predicted review of one respective fragranced product by the user (col. 28, lines 8-18: “given a consumer's set of input parameters, inputs from other consumers who have used and provided performance and/or preference data, and a trained neural network, the product recommendation engine uses the neural network to generate predictions of performance and/or preference for products which have been used by other consumers, but not necessarily by this consumer. Product recommendation output forms for the given consumer (typically in the form of performance and preference predictions contained in custom constructed recommendation tables) are easily generated from the sorted predictions” see also figure 10);
select a group of fragranced products corresponding to a subset of the set of values satisfying a threshold (col. 28, lines 19-26: “the predicted performance score comprises an overall performance score derived from a performance array for each product recommended to a consumer. Typically there is a performance prediction for each concern identified by the consumer. For each product in the category, the performance matrix is output based on the neural network's model for each performance parameter.” See also col. 18, lines 58-67 to col. 19, lines 1-3: “FIGS. 8A and 8B are exemplary output displays of rank order listings for a top-3 set of products by scored predicted preference. FIGS. 9A and 9B are exemplary output displays of rank order listings for a top-3 set of products by scored predicted performance. Note that even though the displays illustrated in FIGS. 8 and 9 are rank ordered by predicted preference and performance, respectively, each of the displays also present predicted performance and preference respectively for each product in the display. Both utilities need not be presented to consumers together in the same display. Note also that the displays depicted in FIGS. 8 and 9 include a lowest known price for each product listed. Presentation of this information is optional.” See also figures 8A, 8B, 9A and 9B);
Shaya teaches in col. 31, lines 9-19: “the neural network uses consumer responses and outputs of the invention's forward intelligence or product recommendation engine as inputs and optimizes product attributes to improve recommendation accuracy in an iterative process. Objectives of this re-training are numerous and include improving the accuracy of future recommendations, generating insights on product performance for the purpose of product development, and the like. The invention may also improve the accuracy of predictions for each consumer as it learns more about the consumer's subjective and/or objective responses to products.” Shaya is silent with regard to the following limitations. However, Lee, in the analogous art of product recommendation management, discloses the following limitations:
aromatic note preferences of the prior user; and aromatic note aversions of the prior user; one fragranced product (¶ 0109: “the questionnaire may include questions that directly represent values for the subject 102. For example, the questionnaire may expressly ask the subject 102 to input a preference for fragrances, including specific product names, preferred notes, or other fragrance characteristics, such as whether the subject likes masculine, feminine or unisex fragrances, etc.” See also ¶ 0072: “question of the questionnaire, the subject may be asked whether they prefer feminine, masculine or unisex scents. In yet another question of the questionnaire, the subject may be asked whether they prefer the scent to be perceivable, subtle, complimentary or strong. In yet another question of the questionnaire, the subject may be asked to enter her preferred fragrances, including the most recently purchased fragrance.” And ¶ 0084: “feedback received from the subject 102 after having used the recommended product(s) may also be stored in the user data store 316 or forward to the product data store 318 in order to improve future product recommendations by the system 100.”);
set of fragranced products (¶ 0043: “the disclosure relate to recommendations for a fragrance, such a perfume or cologne.”);
aromatic note preferences of the user; aromatic note aversions of the user (¶ 0109: “the questionnaire may include questions that directly represent values for the subject 102. For example, the questionnaire may expressly ask the subject 102 to input a preference for fragrances, including specific product names, preferred notes, or other fragrance characteristics, such as whether the subject likes masculine, feminine or unisex fragrances, etc.” See also ¶ 0072: “question of the questionnaire, the subject may be asked whether they prefer feminine, masculine or unisex scents. In yet another question of the questionnaire, the subject may be asked whether they prefer the scent to be perceivable, subtle, complimentary or strong.” And ¶ 0084);
group of fragranced products, fragranced selection data (¶ 0083: “with the knowledge of the preferred characteristic parameters of the fragrances determined from the GSR data, and/or the questionnaire data (optional) and facial data (optional), the recommendation engine 314 is configured to determine an appropriate product stored in a product data store 318 that matches or is highly correlative to the preferred characteristic parameters determined by the system 100.” See also figure 6, note references character 618 and 620 “present the product recommendation to the subject”);
Both Shaya and Lee teach product recommendation management. Shaya teaches in the Abstract: “multivariate analysis to predict or recommend optimal products from a predefined population of commercially available products are disclosed.” Lee teaches in the Abstract: “facilitating product preferences and/or product recommendations.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Lee would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Lee to the teaching of Shaya would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as the fragranced products of Lee, the aromatic note preferences of the prior user, the aromatic note aversions of the prior user, the one fragranced product, the set of fragranced products, the aromatic note preferences of the user, the aromatic note aversions of the user, the group of fragranced products, and the fragranced product selection data into similar systems. Further, as noted by Lee, “the techniques and methodologies of the present disclosure transcend product types, and thus, can be used to provide recommendations to the subject for products other than fragrances.” (Lee, ¶ 0043).
Shaya in view of Lee teaches fragranced products as explained above. Shaya in view of Lee is silent with regard to the following limitations. However, Awerbuch, in the analogous art of product recommendation management, discloses the following limitations:
randomly select a random fragranced product different from the selected group of fragranced products, the selected group of fragranced products and the random fragranced product (¶ 0026: “a product recommendation may be determined using a distributed technique. According to this technique, a product recommendation is made by (a) randomly choosing a product and recommending the chosen product”);
Both Shaya and Awerbuch teach product recommendation management. Shaya teaches in the Abstract: “multivariate analysis to predict or recommend optimal products from a predefined population of commercially available products are disclosed.” Awerbuch teaches in the Abstract: “A technique for recommending products to a user (110 a).” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Awerbuch would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Awerbuch to the teaching of Shaya in view of Lee would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as randomly selecting a random fragranced product different from the selected group of fragranced products into similar systems. Further, as noted by Awerbuch, “(a) forming a committee of recommenders from a set of all recommenders, (b) having each recommender in the committee probe (test) all products in a set of products to identify a product to recommend and (c) generating the list of recommended products from products recommended by the recommenders in the committee.” (Awerbuch, ¶ 0007).
Shaya in view of Lee and Awerbuch teaches product recommendations as explained above. Shaya in view of Lee and Awerbuch is silent with regard to the following limitations. However, Letensorer, in the analogous art of product recommendation management, discloses the following limitations:
defining a sampling kit; and transmit fragranced product selection data to cause the sampling kit to be assembled with the selected group of fragranced products and a random fragranced product and to cause the assembled sampling kit to be provided to the user (¶ 0034: “The fragrance selection module 132 receives a plurality of recommendations generated by the recommendation module 130 and additionally data from the fragrance database 116 and the customer database 118 which is used to select one or more fragrances. Additionally, customer characteristics, preferences, or other weightings may be considered when selecting one or more fragrances. The feedback module 134 receives the selected fragrances and sends a feedback request to the customer. The feedback request may be included with a sample of the recommended fragrances.”);
Both Shaya and Letensorer teach product recommendation management. Shaya teaches in the Abstract: “multivariate analysis to predict or recommend optimal products from a predefined population of commercially available products are disclosed.” Letensorer teaches in the Abstract: “Data is collected related to a plurality of individuals, and which is used to identify whether any of the data are probable predictors of the individuals' preference for one or more fragrances. These predictors are then used to provide recommendations of one or more fragrances to one or more individuals.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Letensorer would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Letensorer to the teaching of Shaya in view of Lee and Awerbuch would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as defining a sampling kit and transmitting fragranced product selection data to cause the sampling kit to be assembled with the selected group of fragranced products and a random fragranced product and to cause the assembled sampling kit to be provided to the user into similar systems. Further, as noted by Letensorer, “The ability to match fragrances to user preferences can aid in creating new or personalized formulations of fragrances, marketing of scented products, and may also prove useful in some therapeutic applications.” (Letensorer, ¶ 0006).
Claim 3:
Shaya as shown discloses the following limitations:
wherein the backend application instance is configured to receive purchase information indicating the user completed a purchase from the group of fragranced products selected by the machine learning model (Figure 12, note reference characters 1205 “Consumer: Selects, Purchase and Uses Product” from Recommendations 1204 which are provided by System’s Forward Intelligence Engine 1203 and 1216 feedback about the product and col. 30, lines 34-36: “Consumers may select and use any product they choose to treat a concern for which they have identified to the system 1200 (e.g., block 1202) and provide feedback about that product (e.g., 1208, 1212, 1216). Block 1208 represents feedback (e.g., new diagnostic measurements and subjective responses) received by the system 1200 from the consumers and incorporated 1212 within the knowledge base of system 1200. Arrows 1215 and 1214, together with block 1213, represent the re-training (sometimes referred to herein as a reverse intelligence engine) of the system's 1200 product recommendation engine (product recommendations 1204 are compared to actual consumer feedback 1208 in order to adjust product attributes 1201).”);
Claim 4:
Shaya as shown discloses the following limitations:
wherein the backend application instance is configured to: create, with the database service, a new entry in the training data comprising: the demographic data of the user; the aromatic note preferences of the user; the aromatic note aversions of the user; and a value satisfying the threshold in respect of the purchased fragranced product (Figure 18, note reference characters 1810 “Products selected and used by population of consumers” and 1815 “Feedback received from population of consumers” for retraining and col. 24, lines 42-46: “a performance response pattern comprises a rank ordering of product performance results in a single concern area or overall for all the products the consumer has used and provided feedback to the invention.” And col. 8, lines 60-65: “collect data on consumer demographics and substrate needs, including consumer preferences for products, the current and historical condition of the substrate to be treated (e.g., consumer's skin), and responses of the substrate to current and historical product uses.”);
Shaya is silent with regard to the following limitations. However, Lee, in the analogous art of product recommendation management, discloses the following limitations:
the aromatic note preferences of the user; the aromatic note aversions of the user; purchased fragranced product (¶ 0109: “the questionnaire may include questions that directly represent values for the subject 102. For example, the questionnaire may expressly ask the subject 102 to input a preference for fragrances, including specific product names, preferred notes, or other fragrance characteristics, such as whether the subject likes masculine, feminine or unisex fragrances, etc.” See also ¶ 0072: “question of the questionnaire, the subject may be asked whether they prefer feminine, masculine or unisex scents. In yet another question of the questionnaire, the subject may be asked whether they prefer the scent to be perceivable, subtle, complimentary or strong. In yet another question of the questionnaire, the subject may be asked to enter her preferred fragrances, including the most recently purchased fragrance.” And ¶ 0084: “feedback received from the subject 102 after having used the recommended product(s) may also be stored in the user data store 316 or forward to the product data store 318 in order to improve future product recommendations by the system 100.”);
Both Shaya and Lee teach product recommendation management. Shaya teaches in the Abstract: “multivariate analysis to predict or recommend optimal products from a predefined population of commercially available products are disclosed.” Lee teaches in the Abstract: “facilitating product preferences and/or product recommendations.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Lee would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Lee to the teaching of Shaya would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as the fragranced products of Lee, the aromatic note preferences of the user, the aromatic note aversions of the user, and the purchased fragranced product into similar systems. Further, as noted by Lee, “the techniques and methodologies of the present disclosure transcend product types, and thus, can be used to provide recommendations to the subject for products other than fragrances.” (Lee, ¶ 0043).
Claim 7:
Shaya as shown discloses the following limitations:
wherein the backend application instance is configured to retrain the trained machine learning model from the updated training data (col. 8, lines 46-52: “The invention may periodically re-train its data processing portions to more accurately predict product performances and consumer preferences. When the embodiment of the invention utilizes re-training, as the numbers of consumers and multiple feedback entries accumulate, the invention acquires greater precision based on the real world experiences of those consumers.” See also figure 18);
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Shaya et al. (US 7,809,601 B2), hereinafter “Shaya”, Lee et al. (US 2021/0406983 A1), hereinafter “Lee”, Awerbuch et al. (US 2006/0136284 A1), hereinafter “Awerbuch”, and Letensorer et al. (WO 2023/159056 A1), hereinafter “Letensorer”, as applied to claim 4 above, and further in view of Anschuetz et al. (US 2020/0327372 A1), hereinafter “Anschuetz”.
Claim 5:
Shaya teaches a trained machine learning model for product recommendation in col. 29, lines 54-56: “improve their recommendation quality over time by periodically re-training the product recommendation engine based on consumer feedback.” Lee teaches in ¶ 0081: “the recommendation engine 314 includes a machine learning model for assisting in determining the product recommendation.” Lee also teaches fragranced products, as explained above. Letensorer teaches in ¶ 0048: “parameters may be selected to be used to train a machine learning algorithm or to modify the weighting for a weighted values table or process to generate fragrance recommendations.” Shaya in view of Lee, Awerbuch and Letensorer is silent with regard to the following limitations. However, Anschuetz, in the analogous art of machine learning model management, discloses the following limitations:
wherein the new entry in the training data comprises: a neutral value corresponding to a set of remaining fragranced products different from the group of fragranced products selected by the machine learning model, the neutral value corresponding to a neutral review score (¶ 0016: “an individual (or entity) may submit or provide feedback associated with a product(s) that the individual may have recently purchased,” i.e., different from the group of products selected, ¶ 0017: “use various machine learning techniques to train a machine learning model(s) (or more generally, a mathematical model(s)) using a training dataset, where the training dataset may include a set of feedback entries related to a set of products as well as a classification or category for each of the feedback entries (e.g., positive or negative).” ¶ 0025: “the feedback that is input or generated by the users may be any textual (i.e., alphanumeric) description of a given product, where the feedback may include a combination of positive, negative, or neutral description or feedback of a certain aspect(s) of the product.” See also ¶ 0030: “a convention may classify a sentiment for product descriptions, where the sentiment may be on a scale from “1” (highly negative) to “5” (highly positive).”);
Both Shaya and Anschuetz teach machine learning model management. Shaya teaches in col. 13, lines 2-7: “recommendation engine utilizes a neural network, predictions and actual consumer responses to product use are used periodically to re-train the algorithms residing in the hidden layers so that its future outputs (e.g., product recommendations) correlate more closely with the consumer feedback.” Anschuetz teaches in ¶ 0029: “a machine learning model associated with feedback for a set of products.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Anschuetz would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Anschuetz to the teaching of Shaya in view of Lee, Awerbuch and Letensorer would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as the new entry in the training data of Anschuetz, comprising a neutral value corresponding to a set of remaining fragranced products different from the group of fragranced products selected by the machine learning model, the neutral value corresponding to a neutral review score, into similar systems. Further, as noted by Anschuetz, “offer improved capabilities to solve these problems by dynamically and accurately classifying product feedback, which results in data that effectively identifies product issues, including those issues that may be effectively addressed before further escalation.” (Anschuetz, ¶ 0021).
Claim 6:
Shaya teaches a trained machine learning model for product recommendation in col. 29, lines 54-56: “improve their recommendation quality over time by periodically re-training the product recommendation engine based on consumer feedback.” Lee teaches in ¶ 0081: “the recommendation engine 314 includes a machine learning model for assisting in determining the product recommendation.” Lee also teaches fragranced products, as explained above. Letensorer teaches in ¶ 0048: “parameters may be selected to be used to train a machine learning algorithm or to modify the weighting for a weighted values table or process to generate fragrance recommendations.” Shaya in view of Lee, Awerbuch and Letensorer is silent with regard to the following limitations. However, Anschuetz, in the analogous art of machine learning model management, discloses the following limitations:
wherein the new entry in the training data comprises: a negative-sentiment value corresponding to a set of remaining fragranced products different from the group of fragranced products selected for purchase by the user, the negative-sentiment value corresponding to a negative review score (¶ 0016: “an individual (or entity) may submit or provide feedback associated with a product(s) that the individual may have recently purchased,” i.e., different from the group of products selected, ¶ 0017: “use various machine learning techniques to train a machine learning model(s) (or more generally, a mathematical model(s)) using a training dataset, where the training dataset may include a set of feedback entries related to a set of products as well as a classification or category for each of the feedback entries (e.g., positive or negative).” ¶ 0025: “the feedback that is input or generated by the users may be any textual (i.e., alphanumeric) description of a given product, where the feedback may include a combination of positive, negative, or neutral description or feedback of a certain aspect(s) of the product.” See also ¶ 0030: “a convention may classify a sentiment for product descriptions, where the sentiment may be on a scale from “1” (highly negative) to “5” (highly positive).”);
Both Shaya and Anschuetz teach machine learning model management. Shaya teaches in col. 13, lines 2-7: “recommendation engine utilizes a neural network, predictions and actual consumer responses to product use are used periodically to re-train the algorithms residing in the hidden layers so that its future outputs (e.g., product recommendations) correlate more closely with the consumer feedback.” Anschuetz teaches in ¶ 0029: “a machine learning model associated with feedback for a set of products.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Anschuetz would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Anschuetz to the teaching of Shaya in view of Lee, Awerbuch and Letensorer would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as the new entry in the training data of Anschuetz, comprising a negative-sentiment value corresponding to a set of remaining fragranced products different from the group of fragranced products selected for purchase by the user, the negative-sentiment value corresponding to a negative review score, into similar systems. Further, as noted by Anschuetz, “offer improved capabilities to solve these problems by dynamically and accurately classifying product feedback, which results in data that effectively identifies product issues, including those issues that may be effectively addressed before further escalation.” (Anschuetz, ¶ 0021).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADJA CHONG, whose telephone number is (571) 270-3939. The examiner can normally be reached Monday-Friday, 8:00 am - 2:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUTAO WU, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NADJA N CHONG CRUZ/
Primary Examiner, Art Unit 3623