Detailed Action
Status of Claims
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is in reply to the Amendment filed on 10/28/2025.
Claims 1-15 and 17-20 are currently pending and have been examined. Claims 1, 3, 9-10, and 14 have been amended. Claim 16 stands cancelled. The claim objections have been overcome by amendment.
Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/28/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 1-8 are directed to a process. Therefore, claims 1-8 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 1 recites at least the following limitations that are believed to recite an abstract idea:
receiving by a recommendation system, from a user, a first input response in reply to a first item bundle comprising a first set of items,
wherein the recommendation system comprises a conversation process, a bundling process, and a question process, wherein the conversation process, the bundling process, and the question process are generated jointly using historical interaction data, and wherein the conversation process, the bundling process, and the question process are based on an architecture;
updating a state model associated with the user to reflect the first input response in reply to the first item bundle;
applying the conversation process to the state model to determine an action type in response to receiving the first input response to the first item bundle;
based on the action type, applying the bundling process to the state model to generate a second item bundle different than the first item bundle;
providing the second item bundle to the user;
receiving, from the user, a second input response in reply to the second item bundle;
updating the state model based on the second input response; and
applying the conversation process to the state model to determine a second action type in response to receiving the second input response to the second item bundle;
based on the second action type, applying the question process to the state model to generate a question related to an attribute of an item in the second item bundle;
receiving a third input response in response to the question; and
updating the state model based on the third input response.
The above limitations recite the concept of item bundle recommendations. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 1-8 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
A user device
Machine-learning modules
Training the ML modules
A self-attentive encoder-decoder architecture
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 2-3 and 5-8 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. As for claim 4, this claim is similar to the independent claims except that it recites the further additional element of fine-tuning the ML modules. This additional element is recited at a high level of generality and also does not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. Therefore, the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
A user device
Machine-learning modules
Training the ML modules
A self-attentive encoder-decoder architecture
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations that amount to an inventive concept.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claims 9-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 9-13 are directed to a machine. Therefore, claims 9-13 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 9 recites at least the following limitations that are believed to recite an abstract idea:
storing a state model comprising an item candidate pool and an attribute candidate pool;
a recommendation system comprising a conversation process, a bundling process, a question process, and a modeling process, wherein the conversation process, the bundling process, and the question process are generated jointly using historical interaction data, and wherein the conversation process, the bundling process, and the question process are based on an architecture;
the bundling process for generating a first item bundle comprising a set of items;
the modeling process for updating the state model to reflect an input response to the first item bundle;
the conversation process for:
determining, based on the state model, an action type in response to receiving the input response to the first item bundle; and
triggering the bundling process based on the action type being a recommendation action;
the bundling process further for generating and outputting, based on the action type, a second item bundle different than the first item bundle;
the modeling process for updating the state model to reflect a second input response to the second item bundle; and
the conversation process for determining a second action type based on the state model;
the question process generating a question related to an attribute of an item in the second item bundle based on the second action type; and
the modeling process updating the state model to reflect a third input response.
The above limitations recite the concept of item bundle recommendations. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 9-13 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
a memory component
machine-learning modules comprising program code
training the ML modules
a self-attentive encoder-decoder architecture
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 10-13 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Therefore, the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
a memory component
machine-learning modules comprising program code
training the ML modules
a self-attentive encoder-decoder architecture
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations that amount to an inventive concept.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claims 14-15 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 14-15 and 17-20 are directed to a process. Therefore, claims 14-15 and 17-20 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 14 recites at least the following limitations that are believed to recite an abstract idea:
receiving, by a recommendation system from a user, a first input response in reply to a first item bundle comprising a set of items,
wherein the recommendation system comprises a conversation process, a bundling process, and a question process, wherein the conversation process, the bundling process, and the question process are generated jointly using historical interaction data, and wherein the conversation process, the bundling process, and the question process are based on an architecture;
updating a state model associated with the user to reflect the first input response in reply to the first item bundle;
applying the conversation process to the state model to determine a first action type in response to receiving the first input response to the first item bundle;
based on the first action type, applying the question process to the state model to generate a question related to one or more items in the first item bundle;
providing the question to the user;
updating the state model to reflect a second input response received from the user in reply to the question; and
applying the conversation process to the state model to determine a second action type in response to receiving the second input response;
based on the second action type, applying the bundling process to the state model to generate a second item bundle different from the first item bundle;
receiving, from the user, a second input response in reply to the second item bundle; and
updating the state model based on the second input response.
The above limitations recite the concept of item bundle recommendations. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 14-15 and 17-20 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
A user device
Machine-learning modules
Training the machine learning modules
A self-attentive encoder-decoder architecture
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 15 and 18-20 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. As for claim 17, this claim is similar to the independent claims except that it recites the further additional element of fine-tuning the ML modules. This additional element is recited at a high level of generality and also does not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. Therefore, the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
A user device
Machine-learning modules
Training the machine learning modules
A self-attentive encoder-decoder architecture
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations that amount to an inventive concept.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claim Interpretation
Examiner notes that the machine-learning modules recited in the claims are interpreted, in light of Applicant’s Specification, e.g., [0021] and [0001], to be machine learning models.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejection – 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-4, 7-10, 12-15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Parker et al. (US 20200302506 A1), hereinafter Parker, in view of Sadeh et al. (US 20190108353 A1), hereinafter Sadeh, and further in view of Wuebker et al. (US 10878201 B1), hereinafter Wuebker.
Regarding Claim 1, Parker discloses a method comprising:
receiving, by a recommendation system from a user device associated with a user, a first input response [feedback] in reply to a first item bundle [outfit] comprising a first set of items (Parker: “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025] – “outfit combinations… include a combination of multiple items from different possible categories such as dresses, tops, bottoms, accessories, second layers, outerwear, and/or footwear, etc.” [0099] – The system is a recommendation engine [0017], and the user uses a smartphone device [0020].),
wherein the recommendation system comprises a machine-learning (ML) conversation module, an ML bundling module, and an ML question module, wherein the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data (Parker: “the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202” [0045] – “a user responds to questions regarding the different contexts outfit combination 801 is suited for. A user may provide feedback on whether the user would wear outfit combination 801 for a date night, for a work event, for travelling, etc. Context-based feedback provided by the user can be used to improve the accuracy of a recommendation prediction model” [0103] – “a machine learning model is trained based on outfit combinations matching certain preferences such as style and fit. The training data may be gathered by collecting user preferences and corresponding preferred outfit combinations for an audience of users, some of which may have similar preferences as the target user” [0019] – “The training data to train the models may be based on behavior and/or feedback of the customer, stylist, and/or the designer as stored over time in the feedback data store, sizing profile information related to garments” [0041] – “The customer then provides the customer chosen outfit as feedback, for example, via a photo, a text description, a voice description, a video, etc. … customer feedback on outfit combinations is stored and used to improve the user's preferences and/or context.” [0030-0031] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]);
updating a state model [profile] associated with the user to reflect the first input response in reply to the first item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – Examiner notes that the term “state model” is interpreted in light of [0025] of Applicant’s Specification to be a “dataset that represents a state of a client, or a conversation”);
applying the ML conversation module to the state model to determine an action type in response to the first input response to the first item bundle (Parker: “At 401, training data is received and prepared. In some embodiments, training data is customer data on outfit combination feedback data” [0062] – “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]);
based on the action type, applying the ML bundling module to the state model to generate a second item bundle different than the first item bundle (Parker: “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202. … recommendations engine 211 may be configured to perform the processes described herein, e.g., the processes shown in FIGS. 1 and 3-5, to provide an outfit combination recommendation using customer catalog data from customer catalog data store 202” [0045]);
providing the second item bundle to the user device (Parker: “one or more of the top scoring products/outfits to be offered to a customer” [0045] – “At 307, outfit combinations are provided to a customer. One or more outfit combinations are provided as suggested outfits for the customer” [0058]);
receiving, from the user device, a second input response in reply to the second item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202. … recommendations engine 211 may be configured to perform the processes described herein, e.g., the processes shown in FIGS. 1 and 3-5, to provide an outfit combination recommendation using customer catalog data from customer catalog data store 202” [0045] – “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]);
updating the state model based on the second input response (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]);
applying the ML conversation module to the state model to determine a second action type in response to receiving the second input response to the second item bundle (Parker: “The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202.” [0045] – “the feedback may indicate the customer prefers certain types of color combinations and dislikes certain style combinations.” [0031] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]).
While Parker teaches that the process may be updated/repeated as the user provides feedback on new recommendations [0073] and that the modules are trained [0016], it does not specifically teach that each of the ML modules is jointly trained and based on a self-attentive encoder-decoder architecture, or the steps of: based on the second action type, applying the ML question module to the state model to generate a question related to an attribute of an item in the second item bundle; receiving a third input response in response to the question; and updating the state model based on the third input response.
However, Sadeh teaches recommendation systems [Abstract], including:
Based on the second action type, applying the ML question module to the state model to generate a question related to an attribute of an item in the second item bundle (Sadeh: “personalized questions can be generated when the user accepts, rejects or modifies a recommended setting. FIG. 10 illustrates such a situation, namely User 1 denies Permission 3 to App 21, despite the fact that User 1 belongs to cluster 1 and that the privacy profile for cluster 1 recommends granting permission 3 to apps in App Category2. At this point, … the PPA can generate a personalized question to see whether this rejection of a recommendation can be used to infer a more general privacy preference for User 1, namely by asking something like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”” [0056] – “information will …be fed into the PPA's local machine learning functionality to … generate additional personalized questions for the user” [0060] – “recommended settings can be presented in bulk with the user having the ability to review them and decide individually which recommendation to accept, reject, or modify” [0055]);
Receiving a third input response in response to the question (Sadeh: “generate a personalized question … like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”, or, as is assumed in FIG. 10, “In general, do you feel uncomfortable granting Permission 3 to any app?” In the particular instance illustrated in FIG. 10, User 1 answers “yes” to the latter question.” [0056]); and
Updating the state model based on the third input response (Sadeh: “User 1 answers “yes” to the latter question. This in turn results in the system updating the individual privacy preference model for User 1 and noting that, in contrast to the general profile for users in Cluster 1 (part of the collective privacy preference model), User 1 wants to systematically deny Permission 3 to all apps, as denoted by the two asterisks next to “Deny” for the entry for Permission 3 in the rows corresponding to App Category 2 and 3.” [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker would continue to teach applying a machine-learning (ML) module to the state model to determine an action type as a follow-up to the input response to the first item bundle, except that now it would also teach the steps of: based on the second action type, applying the ML question module to the state model to generate a question related to an attribute of an item in the second item bundle; receiving a third input response in response to the question; and updating the state model based on the third input response, according to the teachings of Sadeh. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the combination would improve the accuracy of recommendations provided to a user (Sadeh: [0006]).
While Parker/Sadeh do not teach that each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, Wuebker teaches a system for presenting a user with predictions and suggestions ([Abstract], Col. 3, lines 45-50), including that each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture (Wuebker: “all parts of the neural translation model are trained jointly (end-to-end) … A subnetwork, known as an encoder, is used by the neural network to encode a source sentence for a second subnetwork, known as a decoder, which is used to predict words in the target language. … self-attentive, or other neural network structures may be used for the encoder or decoder.” Col. 3, lines 15-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh would continue to teach that the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data, except that now it would also teach each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, according to the teachings of Wuebker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy and speed (Wuebker: Col. 1, lines 20-25).
Regarding Claim 2, Parker/Sadeh/Wuebker teach the method of claim 1,
wherein the first item bundle comprises a first item, selected from an item candidate pool and having a first attribute selected from an attribute candidate pool, and a second item, selected from the item candidate pool and having a second attribute selected from the attribute candidate pool (Parker: “Outfit preferences relate to the preferences the user has for outfit combinations, which include how to combine different individual items including tops, bottoms, shoes, jewelry, outerwear, handbags, etc. In some embodiments, the customer preferences include sizing preferences such as the sizes that best fit the user.” [0022]);
wherein the first input response comprises a rejection of the first item (Parker: “A customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc. As another example, the customer can indicate she or he likes the outfit but prefers the items in different colors that better match her or his skin tone.” [0025] – “the customer may substitute, remove, and/or add different items to a recommended outfit. The customer can provide the changes as feedback for the recommended outfit combination.” [0030]);
wherein updating the state model associated with the user to reflect the first input response in reply to the first item bundle comprises removing the first item from the item candidate pool, based on rejection of the first item (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations. For example, the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “the feedback is used to update the user's catalog of items.” [0031]); and
wherein the second item bundle excludes the first item (Parker: “one or more of the top scoring products/outfits to be offered to a customer” [0045] – “At 307, outfit combinations are provided to a customer. One or more outfit combinations are provided as suggested outfits for the customer” [0058] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations.” [0020]).
Regarding Claim 3, Parker/Sadeh/Wuebker teach the method of claim 1, further comprising:
receiving the second input response indicating rejection of the second item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations. For example, the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “the feedback is used to update the user's catalog of items.” [0031] – “customers provide feedback on new outfit combinations” [0073]),
wherein the second item bundle comprises a first item and a second item (Parker: “A customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc. As another example, the customer can indicate she or he likes the outfit but prefers the items in different colors” [0025]),
wherein the first item is selected from an item candidate pool of the state model and has a first attribute [style/preference] selected from an attribute candidate pool of the state model, and wherein the second item is selected from the item candidate pool and has a second attribute selected from the attribute candidate pool (Parker: “a machine learning model trained using outfit combination information gathered across multiple users is used to automatically determining for the target user, at least a portion of one or more recommended outfit combinations of a plurality of physical items among the physical items within the catalog. For example, a machine learning model is trained based on outfit combinations matching certain preferences such as style and fit.” [0019]);
updating the state model to reflect the rejection of the second item bundle (Parker: “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]);
applying the ML conversation module to the state model to determine the second action type in response to receiving the rejection of the second item bundle (Parker: “At 401, training data is received and prepared. In some embodiments, training data is customer data on outfit combination feedback data” [0062] – “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]).
Regarding Claim 4, Parker/Sadeh/Wuebker teach the method of claim 3. Parker/Sadeh do not teach fine-tuning the ML conversation module, the ML bundling module, and the ML question module based on the second input response; however, Wuebker teaches fine-tuning the ML conversation module, the ML bundling module, and the ML question module based on the second input response (Wuebker: “A neural machine translation system can be adapted to a new domain with a technique called fine tuning. A model which has been fully trained on general domain data serves as the starting point. Training continues in the same fashion on in-domain data. Training can either be performed in batch by leveraging an available domain-relevant bitext, or incrementally” Col. 4, lines 1-10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wuebker with Parker/Sadeh for the reasons identified above with respect to claim 1.
Regarding Claim 7, Parker/Sadeh/Wuebker teach the method of claim 1,
wherein the first input response to the first item bundle comprises a rejection of a first item in the first item bundle and an acceptance of a second item in the first item bundle (Parker: “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025]); and
wherein updating the state model associated with the user to reflect the first input response to the first item bundle comprises:
removing the first item from an item candidate pool of the state model, based on the rejection of the first item (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations. For example, the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “the feedback is used to update the user's catalog of items.” [0031]); and
adding the second item to a short-term context of the state model, the short-term context representing accepted items for a target bundle, based on the acceptance of the second item (Parker: “the outfit combinations created can be utilized as base outfits that can be further modified (e.g., adding, swapping, and/or removing items, etc.) to customize the outfit for a customer or group of customers” [0044] - “the outfit combination options are determined in part based on a predicted outfit combination ranked match score … the ranked match score indicates how strongly the customer likes the outfit” [0054] – “feedback on the recommended outfit combinations to further refine … future recommendations.” [0020]).
Regarding Claim 8, Parker/Sadeh/Wuebker teach the method of claim 1,
wherein the state model comprises (i) a long-term preference describing previously accepted item bundles, (ii) a short-term context describing accepted items and accepted attributes for a target item bundle, and (iii) one or more candidate pools describing items that are candidates for the target item bundle (Parker: “the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025] – “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]); and
wherein updating the state model associated with the user to reflect the first input response to the first item bundle comprises updating the short-term context and the one or more candidate pools (Parker: “At 309, feedback on the suggested outfit combinations is received. …Feedback may also be provided using a finer granularity such as the customer liked the overall outfit but not one of the items. The feedback may also specify what about the item the user would like changed. … The feedback may be stored and associated with the customer and/or the outfit combination. … At 311, feedback on the provided outfit combinations is stored. For example outfit combination feedback information may be stored in a database such as feedback profile data store … The information about a customer's style and/or outfit combination preference may be extracted to learn and predict over time by a computer system … the outfit combinations for a customer” [0059-0060]).
Regarding Claim 9, Parker discloses a system comprising:
a memory component storing a state model comprising an item candidate pool and an attribute candidate pool (Parker: “a computer program product embodied on a computer readable storage medium … processor configured to execute instructions stored on … a memory coupled to the processor” [0014] – “data store 203 may be configured to store … machine learning models” [0033] – “Outfit preferences relate to the preferences the user has for outfit combinations, which include how to combine different individual items including tops, bottoms, shoes, jewelry, outerwear, handbags, etc. In some embodiments, the customer preferences include sizing preferences such as the sizes that best fit the user.” [0022]);
a recommendation system comprising a machine-learning (ML) conversation module, an ML bundling module, an ML question module, and a modeling module, wherein the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data (Parker: “the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202” [0045] – “a user responds to questions regarding the different contexts outfit combination 801 is suited for. A user may provide feedback on whether the user would wear outfit combination 801 for a date night, for a work event, for travelling, etc. Context-based feedback provided by the user can be used to improve the accuracy of a recommendation prediction model” [0103] – “a machine learning model is trained based on outfit combinations matching certain preferences such as style and fit. The training data may be gathered by collecting user preferences and corresponding preferred outfit combinations for an audience of users, some of which may have similar preferences as the target user” [0019] – “The training data to train the models may be based on behavior and/or feedback of the customer, stylist, and/or the designer as stored over time in the feedback data store, sizing profile information related to garments” [0041] – “The customer then provides the customer chosen outfit as feedback, for example, via a photo, a text description, a voice description, a video, etc. … customer feedback on outfit combinations is stored and used to improve the user's preferences and/or context.” [0030-0031] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067] – The system is a recommendation engine [0017].);
the ML bundling module comprising program code for generating a first item bundle comprising a set of items (Parker: “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “Outfit preferences relate to the preferences the user has for outfit combinations, which include how to combine different individual items including tops, bottoms, shoes, jewelry, outerwear, handbags, etc. In some embodiments, the customer preferences include sizing preferences such as the sizes that best fit the user.” [0022]);
the modeling module comprising program code for updating the state model to reflect an input response to the first item bundle (Parker: “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025] – “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]);
the ML conversation module comprising program code for:
determining, based on the state model, an action type in response to receiving the input response to the first item bundle (Parker: “At 401, training data is received and prepared. In some embodiments, training data is customer data on outfit combination feedback data” [0062] – “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]); and
triggering the ML bundling module based on the action type being a recommendation action (Parker: “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations” [0045] – “At 501, a request to enroll is received. …As part of enrollment, the customer may provide information” [0075] – “At 505, product options are determined based on the customer and global attributes.” [0080] – “At 513, a product selection and recommended outfit combinations are provided to a customer” [0088] – See Figure 5.);
the ML bundling module further comprising program code for generating and outputting, based on the action type, a second item bundle different than the first item bundle (Parker: “one or more of the top scoring products/outfits to be offered to a customer” [0045] – “At 307, outfit combinations are provided to a customer. One or more outfit combinations are provided as suggested outfits for the customer” [0058]);
the modeling module comprising program code for updating the state model to reflect a second input response to the second item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]);
the ML conversation module comprising program code for determining a second action type based on the state model (Parker: “The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202.” [0045] – “the feedback may indicate the customer prefers certain types of color combinations and dislikes certain style combinations.” [0031]);
While Parker teaches that the process may be updated/repeated as the user provides feedback on new recommendations [0073] and that the modules are trained [0016], it does not specifically teach that each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, or the limitations of: the ML question module comprising program code for generating a question related to an attribute of an item in the second item bundle based on the second action type; and the modeling module comprising program code for updating the state model to reflect a third input response.
However, Sadeh teaches recommendation systems [Abstract], including:
The ML question module comprising program code for generating a question related to an attribute of an item in the second item bundle based on the second action type (Sadeh: “personalized questions can be generated when the user accepts, rejects or modifies a recommended setting. FIG. 10 illustrates such a situation, namely User 1 denies Permission 3 to App 21, despite the fact that User 1 belongs to cluster 1 and that the privacy profile for cluster 1 recommends granting permission 3 to apps in App Category2. At this point, … the PPA can generate a personalized question to see whether this rejection of a recommendation can be used to infer a more general privacy preference for User 1, namely by asking something like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”” [0056] – “information will …be fed into the PPA's local machine learning functionality to … generate additional personalized questions for the user” [0060] – “recommended settings can be presented in bulk with the user having the ability to review them and decide individually which recommendation to accept, reject, or modify” [0055]);
The modeling module comprising program code for updating the state model to reflect a third input response (Sadeh: “generate a personalized question … like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”, or, as is assumed in FIG. 10, “In general, do you feel uncomfortable granting Permission 3 to any app?” In the particular instance illustrated in FIG. 10, User 1 answers “yes” to the latter question. This in turn results in the system updating the individual privacy preference model for User 1 and noting that, in contrast to the general profile for users in Cluster 1 (part of the collective privacy preference model), User 1 wants to systematically deny Permission 3 to all apps, as denoted by the two asterisks next to “Deny” for the entry for Permission 3 in the rows corresponding to App Category 2 and 3.” [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker would continue to teach determining, based on the state model, an action type in response to receiving the input response to the first item bundle, except that now it would also teach the limitations of: the ML question module comprising program code for generating a question related to an attribute of an item in the second item bundle based on the second action type; and the modeling module comprising program code for updating the state model to reflect a third input response, according to the teachings of Sadeh. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the combination would improve the accuracy of recommendations provided to a user (Sadeh: [0006]).
While Parker/Sadeh do not teach that each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, Wuebker teaches a system for presenting a user with predictions and suggestions ([Abstract], Col. 3, lines 45-50), including that each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture (Wuebker: “all parts of the neural translation model are trained jointly (end-to-end) … A subnetwork, known as an encoder, is used by the neural network to encode a source sentence for a second subnetwork, known as a decoder, which is used to predict words in the target language. … self-attentive, or other neural network structures may be used for the encoder or decoder.” Col. 3, lines 15-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh would continue to teach that the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data, except that now it would also teach each of the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, according to the teachings of Wuebker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy and speed (Wuebker: Col. 1, lines 20-25).
Regarding Claim 10, Parker/Sadeh/Wuebker teach the system of claim 9,
wherein the modeling module further comprises program code for receiving the second input response indicating rejection of the second item bundle and updating the state model to reflect the rejection of the second item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations. For example, the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “the feedback is used to update the user's catalog of items.” [0031] – “customers provide feedback on new outfit combinations” [0073]),
wherein the second item bundle comprises a first item and a second item (Parker: “A customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc. As another example, the customer can indicate she or he likes the outfit but prefers the items in different colors” [0025]),
wherein the first item is selected from an item candidate pool of the state model and has a first attribute selected from an attribute candidate pool of the state model, and wherein the second item is selected from the item candidate pool and has a second attribute selected from the attribute candidate pool (Parker: “a machine learning model trained using outfit combination information gathered across multiple users is used to automatically determining for the target user, at least a portion of one or more recommended outfit combinations of a plurality of physical items among the physical items within the catalog. For example, a machine learning model is trained based on outfit combinations matching certain preferences such as style and fit.” [0019]); and
wherein the ML conversation module further comprises program code for inputting the state model to determine the second action type in response to receiving the rejection of the second item bundle (Parker: “At 401, training data is received and prepared. In some embodiments, training data is customer data on outfit combination feedback data” [0062] – “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073]).
Regarding Claim 12, Parker/Sadeh/Wuebker teach the system of claim 9,
wherein the state model comprises a long-term preference describing previously accepted item bundles, a short-term context describing accepted items and accepted attributes for a target item bundle, and one or more candidate pools describing items that are candidates for the target item bundle (Parker: “the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025] – “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]); and
wherein updating the state model to reflect the input response to the first item bundle comprises updating the short-term context and the one or more candidate pools (Parker: “At 309, feedback on the suggested outfit combinations is received. …Feedback may also be provided using a finer granularity such as the customer liked the overall outfit but not one of the items. The feedback may also specify what about the item the user would like changed. … The feedback may be stored and associated with the customer and/or the outfit combination. … At 311, feedback on the provided outfit combinations is stored. For example outfit combination feedback information may be stored in a database such as feedback profile data store … The information about a customer's style and/or outfit combination preference may be extracted to learn and predict over time by a computer system … the outfit combinations for a customer” [0059-0060]).
Regarding Claim 13, Parker/Sadeh/Wuebker teach the system of claim 12, wherein: the input response to the first item bundle comprises a rejection of a first item in the first item bundle and an acceptance of a second item in the first item bundle (Parker: “A customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc. As another example, the customer can indicate she or he likes the outfit but prefers the items in different colors that better match her or his skin tone.” [0025] – “the customer may substitute, remove, and/or add different items to a recommended outfit. The customer can provide the changes as feedback for the recommended outfit combination.” [0030]); and
updating the state model to reflect the response to the first item bundle comprises: removing the first item from an item candidate pool of the state model, based on the rejection of the first item; and adding the second item to a short-term context of the state model, the short-term context representing accepted items for a target bundle, based on the acceptance of the second item (Parker: “A customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc. As another example, the customer can indicate she or he likes the outfit but prefers the items in different colors that better match her or his skin tone.” [0025] – “the customer may substitute, remove, and/or add different items to a recommended outfit. The customer can provide the changes as feedback for the recommended outfit combination.” [0030] – “At 309, feedback on the suggested outfit combinations is received. … Feedback may also be provided using a finer granularity such as the customer liked the overall outfit but not one of the items. The feedback may also specify what about the item the user would like changed. … The feedback may be stored and associated with the customer and/or the outfit combination. … At 311, feedback on the provided outfit combinations is stored. For example outfit combination feedback information may be stored in a database such as feedback profile data store … The information about a customer's style and/or outfit combination preference may be extracted to learn and predict over time by a computer system … the outfit combinations for a customer” [0059-0060]).
Regarding Claim 14, Parker discloses a method comprising:
receiving, by a recommendation system from a user device associated with a user, a first input response [feedback] in reply to a first item bundle [outfit] comprising a set of items (Parker: “sequence of outfits … is shown to the customer and the customer can provide different gestures to provide positive or negative feedback. … customer can indicate she or he likes the outfit but prefers a top in a different material or a different style of shoes, etc.” [0025] – “outfit combinations… include a combination of multiple items from different possible categories such as dresses, tops, bottoms, accessories, second layers, outerwear, and/or footwear, etc.” [0099] – The system is a recommendation engine [0017], and the user uses a smartphone device [0020].),
wherein the recommendation system comprises a machine-learning (ML) conversation module, an ML bundling module, and an ML question module, wherein the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data (Parker: “the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202” [0045] – “a user responds to questions regarding the different contexts outfit combination 801 is suited for. A user may provide feedback on whether the user would wear outfit combination 801 for a date night, for a work event, for travelling, etc. Context-based feedback provided by the user can be used to improve the accuracy of a recommendation prediction model” [0103] – “a machine learning model is trained based on outfit combinations matching certain preferences such as style and fit. The training data may be gathered by collecting user preferences and corresponding preferred outfit combinations for an audience of users, some of which may have similar preferences as the target user” [0019] – “The training data to train the models may be based on behavior and/or feedback of the customer, stylist, and/or the designer as stored over time in the feedback data store, sizing profile information related to garments” [0041] – “The customer then provides the customer chosen outfit as feedback, for example, via a photo, a text description, a voice description, a video, etc. 
… customer feedback on outfit combinations is stored and used to improve the user's preferences and/or context.” [0030-0031] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]);
updating a state model [profile] associated with the user to reflect the first input response in reply to the first item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]);
applying the ML conversation module to the state model to determine a first action type in response to receiving the first input response to the first item bundle (Parker: “At 401, training data is received and prepared. … training data is customer data on outfit combination feedback data” [0062] – “At 407, the trained machine learning model … is transferred into a machine learning engine, such as recommendation engine 211 of FIG. 2 for generating outfit combination recommendations” [0073] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]);
updating the state model to reflect a second input response received from the user device (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202. … recommendations engine 211 may be configured to perform the processes described herein, e.g., the processes shown in FIGS. 1 and 3-5, to provide an outfit combination recommendation using customer catalog data from customer catalog data store 202” [0045] – “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]);
applying the ML conversation module to the state model to determine a second action type in response to receiving the second input response (Parker: “The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202.” [0045] – “the feedback may indicate the customer prefers certain types of color combinations and dislikes certain style combinations.” [0031] – “a customer may provide feedback (e.g., text) when they receive an outfit combination recommendation. The feedback provided by the customer may be processed with NLP techniques to extract features.” [0067]);
based on the second action type, applying the ML bundling module to the state model to generate a second item bundle different from the first item bundle (Parker: “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202. … recommendations engine 211 may be configured to perform the processes described herein, e.g., the processes shown in FIGS. 1 and 3-5, to provide an outfit combination recommendation using customer catalog data from customer catalog data store 202” [0045]);
receiving from the user device a second input response in reply to the second item bundle (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations… The customer's feedback may be used to improve the machine learning training models and may be stored in feedback profile data store 201 and/or customer catalog data store 202. … recommendations engine 211 may be configured to perform the processes described herein, e.g., the processes shown in FIGS. 1 and 3-5, to provide an outfit combination recommendation using customer catalog data from customer catalog data store 202” [0045] – “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]); and
updating the state model based on the second input response (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052]).
While Parker teaches that the process may be updated/repeated as the user provides feedback on new recommendations [0073] and that the modules are trained [0016], it does not specifically teach that the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, or the steps of: based on the first action type, applying the ML question module to the state model to generate a question related to one or more items in the first item bundle; providing the question to the user device; and that the second input response received from the user is in reply to the question.
However, Sadeh teaches recommendation systems [Abstract], including:
Based on the first action type, applying the ML question module to the state model to generate a question related to one or more items in the first item bundle (Sadeh: “personalized questions can be generated when the user accepts, rejects or modifies a recommended setting. FIG. 10 illustrates such a situation, namely User 1 denies Permission 3 to App 21, despite the fact that User 1 belongs to cluster 1 and that the privacy profile for cluster 1 recommends granting permission 3 to apps in App Category2. At this point, … the PPA can generate a personalized question to see whether this rejection of a recommendation can be used to infer a more general privacy preference for User 1, namely by asking something like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”” [0056] – “information will …be fed into the PPA's local machine learning functionality to … generate additional personalized questions for the user” [0060] – “recommended settings can be presented in bulk with the user having the ability to review them and decide individually which recommendation to accept, reject, or modify” [0055]);
providing the question to the user device (Sadeh: “the PPA can generate a personalized question to see whether this rejection of a recommendation can be used to infer a more general privacy preference for User 1, namely by asking something like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”, or, as is assumed in FIG. 10, “In general, do you feel uncomfortable granting Permission 3 to any app?” In the particular instance illustrated in FIG. 10, User 1 answers “yes” to the latter question.” [0056]);
and that the second input response received from the user is in reply to the question (Sadeh: “generate a personalized question … like, “In general, do you feel uncomfortable granting Permission 3 to apps in App Category 2”, or, as is assumed in FIG. 10, “In general, do you feel uncomfortable granting Permission 3 to any app?” In the particular instance illustrated in FIG. 10, User 1 answers “yes” to the latter question.” [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker would continue to teach applying the ML conversation module to the state model to determine a first action type in response to receiving the first input response to the first item bundle, except that now it would also teach the steps of: based on the first action type, applying the ML question module to the state model to generate a question related to one or more items in the first item bundle; providing the question to the user device; and that the second input response received from the user is in reply to the question, according to the teachings of Sadeh. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy of recommendations to a user (Sadeh: [0006]).
While Parker/Sadeh do not teach that the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, Wuebker teaches a system for presenting a user with predictions and suggestions ([Abstract], Col. 3, lines 45-50), including that the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture (Wuebker: “all parts of the neural translation model are trained jointly (end-to-end) … A subnetwork, known as an encoder, is used by the neural network to encode a source sentence for a second subnetwork, known as a decoder, which is used to predict words in the target language. … self-attentive, or other neural network structures may be used for the encoder or decoder.” Col. 3, lines 15-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh would continue to teach the ML conversation module, the ML bundling module, and the ML question module are trained using historical interaction data, except that now it would also teach that the ML modules are jointly trained and are based on a self-attentive encoder-decoder architecture, according to the teachings of Wuebker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy and speed (Wuebker: Col. 1, lines 20-25).
Regarding Claim 15, Parker/Sadeh/Wuebker teach the method of claim 14, further comprising:
based on the second action type, applying the ML bundling module to the state model to generate a second item bundle different from the first item bundle (Parker: “The recommendations engine 211 may be configured to employ adaptive machine learning to provide … outfit combination recommendations” [0045]); and
recommending the second item bundle (Parker: “one or more of the top scoring products/outfits to be offered to a customer” [0045] – “At 307, outfit combinations are provided to a customer. One or more outfit combinations are provided as suggested outfits for the customer” [0058]).
Regarding Claim 17, Parker/Sadeh/Wuebker teach the method of claim 16. Parker/Sadeh do not teach fine-tuning the ML conversation module, the ML bundling module, and the ML question module using the second input response; however, Wuebker teaches this fine-tuning (Wuebker: “A neural machine translation system can be adapted to a new domain with a technique called fine tuning. A model which has been fully trained on general domain data serves as the starting point. Training continues in the same fashion on in-domain data. Training can either be performed in batch by leveraging an available domain-relevant bitext, or incrementally” Col. 4, lines 1-10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wuebker with Parker/Sadeh for the reasons identified above with respect to claim 14.
Regarding Claim 20, Parker/Sadeh/Wuebker teach the method of claim 14, wherein updating the state model associated with the user to reflect the first input response in reply to the first item bundle comprises:
removing one or more items from an item candidate pool of the state model, wherein a recommended item bundle comprises items selected from the item candidate pool (Parker: “When the customer … provides feedback on products, customer attributes may be updated. For example, the customer profile and feedback may be updated.” [0052] – “the user provides feedback on the recommended outfit combinations to further refine the user's preferences and future recommendations. For example, the user can swipe using different gestures to accept or reject different outfit combinations” [0020] – “the feedback is used to update the user's catalog of items.” [0031]); and
adding one or more items to a short-term context of the state model, wherein the short-term context indicates accepted items in a target item bundle (Parker: “the outfit combinations created can be utilized as base outfits that can be further modified (e.g., adding, swapping, and/or removing items, etc.) to customize the outfit for a customer or group of customers” [0044] – “the outfit combination options are determined in part based on a predicted outfit combination ranked match score … the ranked match score indicates how strongly the customer likes the outfit” [0054] – “feedback on the recommended outfit combinations to further refine … future recommendations.” [0020]).
Claims 5-6, 11, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Parker/Sadeh/Wuebker, and further in view of Danker et al (US 20130174189 A1), hereinafter Danker.
Regarding Claim 5, Parker/Sadeh/Wuebker teach the method of claim 3, but do not specifically teach that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool; and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool.
However, Danker teaches a feedback-based recommendation system (Danker: Abstract), including that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool; and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool (Danker: “recommended media content is determined by removing from a list of available content (given by the EPG data), media content that is associated with negative user feedback, as described with reference to block 306. As negative user feedback is gathered over time, areas of a user's non-interest (or dislike) become clearer, resulting in more accurate recommendations.” [0031] – See Also Claims 1 & 9.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh/Wuebker would continue to teach updating the state model based on the third input response, except that now it would also teach that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool; and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool, according to the teachings of Danker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy of recommendations (Danker: [0031]).
Regarding Claim 6, Parker/Sadeh/Wuebker/Danker teach the method of claim 5, further comprising applying the ML bundling module to the state model to generate a third item bundle based on the updated item candidate pool and the updated attribute candidate pool (Parker: “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]).
Regarding Claim 11, Parker/Sadeh/Wuebker teach the system of claim 10,
wherein the ML bundling module further comprises program code for inputting the state model to generate a third item bundle based on the updated item candidate pool and the updated attribute candidate pool (Parker: “As additional data is collected and prepared, new versions of the model are trained and prepared for production use. For example, as customers provide feedback on new outfit combinations, additional feedback information is collected for the outfit and added to a training set for the customer's target category” [0073]),
but do not specifically teach that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool, and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool.
However, Danker teaches a feedback-based recommendation system (Danker: Abstract), including that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool, and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool (Danker: “recommended media content is determined by removing from a list of available content (given by the EPG data), media content that is associated with negative user feedback, as described with reference to block 306. As negative user feedback is gathered over time, areas of a user's non-interest (or dislike) become clearer, resulting in more accurate recommendations.” [0031] – See Also Claims 1 & 9.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh/Wuebker would continue to teach updating the state model based on the third input response, except that now it would also teach that updating the state model based on the third input response comprises: updating the attribute candidate pool by removing the first attribute to obtain an updated attribute candidate pool, and updating the item candidate pool by removing one or more items having the first attribute to obtain an updated item candidate pool, according to the teachings of Danker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy of recommendations (Danker: [0031]).
Regarding Claim 18, Parker/Sadeh/Wuebker teach the method of claim 14, but do not specifically teach that updating the state model to reflect the second input response received from the user device in reply to the question comprises updating an attribute candidate pool of the state model to remove one or more attributes, wherein a recommended item bundle is selected according to attributes in the attribute candidate pool.
However, Danker teaches a feedback-based recommendation system (Danker: Abstract), including that updating the state model to reflect the second input response received from the user device in reply to the question comprises updating an attribute candidate pool of the state model to remove one or more attributes, wherein a recommended item bundle is selected according to attributes in the attribute candidate pool (Danker: “recommended media content is determined by removing from a list of available content (given by the EPG data), media content that is associated with negative user feedback, as described with reference to block 306. As negative user feedback is gathered over time, areas of a user's non-interest (or dislike) become clearer, resulting in more accurate recommendations.” [0031] – See Also Claims 1 & 9.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh/Wuebker would continue to teach updating the state model to reflect the second input response received in reply to the question, except that now it would also teach that updating the state model to reflect the second input response received from the user device in reply to the question comprises updating an attribute candidate pool of the state model to remove one or more attributes, wherein a recommended item bundle is selected according to attributes in the attribute candidate pool, according to the teachings of Danker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy of recommendations (Danker: [0031]).
Regarding Claim 19, Parker/Sadeh/Wuebker teach the method of claim 14, but do not specifically teach that updating the state model to reflect the second input response received from the user device in reply to the question further comprises updating a category candidate pool of the state model to remove one or more categories, wherein a recommended item bundle is selected according to categories in the category candidate pool.
However, Danker teaches a feedback-based recommendation system (Danker: Abstract), including that updating the state model to reflect the second input response received from the user device in reply to the question further comprises updating a category candidate pool of the state model to remove one or more categories, wherein a recommended item bundle is selected according to categories in the category candidate pool (Danker: “recommended media content is determined by removing from a list of available content (given by the EPG data), media content that is associated with negative user feedback, as described with reference to block 306. As negative user feedback is gathered over time, areas of a user's non-interest (or dislike) become clearer, resulting in more accurate recommendations.” [0031] – See Also Claims 1 & 9.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Parker/Sadeh/Wuebker would continue to teach updating the state model to reflect the second input response received in reply to the question, except that now it would also teach that updating the state model to reflect the second input response received from the user device in reply to the question further comprises updating a category candidate pool of the state model to remove one or more categories, wherein a recommended item bundle is selected according to categories in the category candidate pool, according to the teachings of Danker. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved accuracy of recommendations (Danker: [0031]).
Response to Arguments
Applicant’s arguments filed 10/28/2025 have been fully considered but are not persuasive.
Claim Rejection – 35 USC § 101
Applicant argues that “the alleged abstract idea is integrated into a practical application,” specifically arguing that the claims provide “a technical improvement to the technical area of item bundle recommendations by an online platform,” stating that claim 1 includes three ML-based modules which are trained jointly, and “the claims also embody a multi-round conversational recommendation approach,” and specifies that the ML modules are “based on a self-attentive encoder-decoder architecture.” Applicant argues that this combination of features “overcomes the drawbacks of the existing bundle recommendation systems, such as interaction sparsity and large output space, and improves the technology of item bundle recommendation with more accuracy and effectiveness.” Applicant concludes that the claims are “directed to a technical improvement of an existing technology rather than an abstract idea.”
Examiner disagrees. Argued limitations such as the ability to engage the user in conversation, generate and present questions, and bundle items for recommendation, in a multi-round conversational approach, are part of the abstract idea itself, such that the alleged improvement is at best a business improvement stemming from the abstract idea rather than from the additional elements. The additional elements recited, e.g., the ML modules and their architecture, are recited at a high level of generality. Rather than offering a specific improvement to the functioning of recommender-system technology, the additional elements provide only a general linking to a technical field, such that they amount to mere instructions to apply the abstract idea to a technological environment [MPEP 2106.05(f)].
Claim Rejection – 35 USC § 103
Applicant argues with respect to claims 1, 9, and 14 that none of the cited references disclose or make obvious “wherein the ML conversation module, the ML bundling module, and the ML question module are trained jointly using historical interaction data, and wherein the ML conversation module, the ML bundling module, and the ML question module are based on a self-attentive encoder-decoder architecture.”
Examiner partially disagrees. Parker discloses a recommendation system [0017] in which a user provides feedback on a generated outfit [0052] comprising multiple items [0099]. Trained machine learning models are used to generate subsequent recommendations [0073] adaptively based on user feedback, including user responses to questions [0103]. These models are trained based on historical behavior and feedback of the user [0041-0045], and allow the system to communicate with the user about each recommended grouping/outfit of items, to generate new outfit groupings, and to process feedback, including query responses from the user. The modules allow conversational interaction with the user, such as textual or voice feedback [0030-0031], which is processed by natural language processing [0067]. Argued reference Moon is not relied upon in the rejection above, with Examiner instead relying upon newly-cited reference Wuebker to teach that each of the ML modules of a system may be jointly trained using self-attentive encoder and decoder structures [Col. 3].
Applicant further argues that the dependent claims each “depends from an allowable independent claim and therefore is allowable over the cited reference for at least the reasons set forth above with regard to their respective base claims.”
Examiner disagrees for the reasons recited in the Rejection and the Response above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Semarjian et al. (US 20210182934 A1) teaches machine-learned outfit recommendations in which users provide answers to questions that capture user preferences on recommended items.
Selinger et al. (US 20100250336 A1) teaches multi-product machine-learned recommendations that are updated in real time in response to user rejections.
Beckham (US 20170011452 A1) teaches a system for suggesting outfits, which allows a user to replace specific items in a suggested outfit with another item from a pool.
Kolawa et al. (US 6,370,513 B1) teaches recommendation systems that can recommend a group of items together, such as ingredients, and, based on user feedback, ask the user specifically why each item was rejected.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J SULLIVAN whose telephone number is (571)272-9736. The examiner can normally be reached Mon - Fri 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached on (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.S./Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689