DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/24/2025 has been entered.
Status of Claims
Claims 1-11, 13, and 15-22 submitted on 09/24/2025 are pending and have been examined. Claims 1, 3, 6, 8-11, 13, 16, and 18-22 have been amended. Claims 12 and 14 have been canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
No foreign priority or domestic benefit was claimed by the applicant, and the application has been examined with respect to its filing date of 01/30/2023.
Claim Objections
Claim 21 is objected to because of the following informalities:
Claim 21 recites, “non-transitory computer-readable…” on page 9 of the claims submitted on 09/24/2025. For purposes of compact prosecution and clarity on the record, Examiner will interpret the limitation as, “A non-transitory computer-readable…”
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11, 13, and 15-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Step 1
Claims 1-10 are directed to a machine, claims 11, 13, and 15-20 are directed to a process, and claims 21-22 are directed to an article of manufacture (see MPEP 2106.03).
Step 2A, Prong 1
Claim 1, taken as representative, recites at least the following limitations that recite an abstract idea:
a system comprising:
identifying, using a proactive low discoverability identification model, first products having discoverabilities within a marketplace that are low relative to discoverabilities of other products published on the marketplace by:
receiving historical engagement information for products in the marketplace;
clustering the products of a product type into clusters according to a set of attributes;
identifying a cluster of the products with a highest quantity of the products as a first subset of the products;
identifying a second subset of the products, from one or more clusters other than the identified cluster, that are similar to the first subset of the products based on at least one similarity criterion; and
determining a third subset of the products by filtering the second subset of the products based on comparing the historical engagement information for the second subset of the products to the historical engagement information for the first subset of the products and identifying the third subset of the products as a portion of the second subset of the products that are less than an engagement threshold of an engagement for the first subset of the products;
displaying the third subset of the products with increased frequency on the marketplace based on one or more searches in the marketplace; and
determining, using an impact assessment model, an impact on the discoverabilities of the first products based on displaying the third subset of the products with increased frequency by:
determining at least one benchmark for the first subset of the products based on the historical engagement information for the first subset of the products; and
determining a lift score for the third subset of the products based on the at least one benchmark and engagement information for the third subset of the products, as displayed with the increased frequency.
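For purposes of illustration only, the identification and impact-assessment steps recited above may be sketched as follows. The product records, attribute keys, engagement figures, post-boost figures, and the 50% threshold in this sketch are hypothetical examples and are not drawn from the claims or the specification.

```python
from collections import defaultdict

# Hypothetical product records: (id, attributes, historical engagement count)
products = [
    ("p1", {"type": "shoe", "color": "black"}, 120),
    ("p2", {"type": "shoe", "color": "black"}, 110),
    ("p3", {"type": "shoe", "color": "black"}, 130),
    ("p4", {"type": "shoe", "color": "red"}, 15),
    ("p5", {"type": "shoe", "color": "red"}, 90),
]

# Cluster the products of a product type according to a set of attributes
# (here, a single hypothetical attribute, "color").
clusters = defaultdict(list)
for pid, attrs, eng in products:
    clusters[attrs["color"]].append((pid, attrs, eng))

# First subset: the cluster with the highest quantity of products.
first = max(clusters.values(), key=len)
first_ids = {pid for pid, _, _ in first}

# Second subset: products from the other clusters that satisfy a
# similarity criterion (here, sharing the same product type).
second = [p for c in clusters.values() for p in c
          if p[0] not in first_ids and p[1]["type"] == first[0][1]["type"]]

# Benchmark for the first subset from its historical engagement; third
# subset: products whose engagement is less than an engagement threshold
# (here, 50% of the benchmark) of the engagement for the first subset.
benchmark = sum(e for _, _, e in first) / len(first)
threshold = 0.5 * benchmark
third = [p for p in second if p[2] < threshold]

# Lift score: engagement of the third subset, as displayed with increased
# frequency (hypothetical post-boost figures), relative to the benchmark.
post_boost = {"p4": 60}
lift = {pid: post_boost[pid] / benchmark for pid, _, _ in third}
```

With the hypothetical figures above, the largest cluster ("black") supplies the first subset, its mean engagement of 120 serves as the benchmark, and only "p4" (engagement 15) falls below the 50% threshold and is boosted.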
The above limitations, under their broadest reasonable interpretation, fall within the “Mental Processes” grouping of abstract ideas enumerated in MPEP 2106.04(a)(2)(III), in that they recite concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). Further, the broadest reasonable interpretation of the limitations also encompasses the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in MPEP 2106.04(a)(2)(II), in that they recite a commercial interaction (see at least ¶0002 of the specification). Claims 11 and 21 recite similar limitations as claim 1.
Thus, under Prong 1 of Step 2A, claims 1, 11, and 21 recite an abstract idea.
Step 2A, Prong 2
Claim 1 includes the following additional elements that are bolded:
a system comprising:
one or more processors; and
one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform:
identifying, using a proactive low discoverability identification model, first products having discoverabilities within a web-based marketplace that are low relative to discoverabilities of other products published on the web-based marketplace by:
receiving historical engagement information for products in the web-based marketplace;
clustering the products of a product type into clusters according to a set of attributes;
identifying a cluster of the products with a highest quantity of the products as a first subset of the products;
identifying a second subset of the products, from one or more clusters other than the identified cluster, that are similar to the first subset of the products based on at least one similarity criterion; and
determining a third subset of the products by filtering the second subset of the products based on comparing the historical engagement information for the second subset of the products to the historical engagement information for the first subset of the products and identifying the third subset of the products as a portion of the second subset of the products that are less than an engagement threshold of an engagement for the first subset of the products;
displaying the third subset of the products on one or more graphical user interface (GUIs) with increased frequency on the web-based marketplace based on one or more web-based searches in the web-based marketplace; and
determining, using an impact assessment model, an impact on the discoverabilities of the first products based on displaying the third subset of the products with increased frequency by:
determining at least one benchmark for the first subset of the products based on the historical engagement information for the first subset of the products; and
determining a lift score for the third subset of the products based on the at least one benchmark and engagement information for the third subset of the products, as displayed with the increased frequency.
Claims 11 and 21 include the same additional elements as claim 1.
The additional elements recited in claims 1, 11, and 21 merely invoke such elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment of GUIs and web-based marketplaces (see MPEP 2106.05(f) and MPEP 2106.05(h)). These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration (see Fig. 2; ¶¶0022-0024).
As such, under Prong 2 of Step 2A, when considered both individually and as a whole, the additional elements do not integrate the judicial exception into a practical application and, thus, claims 1, 11, and 21 are directed to an abstract idea.
Step 2B
As noted above, while the recitation of the additional elements in independent claims 1, 11, and 21 are acknowledged, claims 1, 11, and 21 merely invoke such additional elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(f) and MPEP 2106.05(h)).
Even when considered as an ordered combination, the additional elements of claims 1, 11, and 21 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 11, and 21 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claims 1, 11, and 21 are ineligible.
Dependent claims 2, 4, 6-10, 16-20, and 22, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. More specifically, dependent claims 2, 4, 6-10, 16-20, and 22 merely further define the abstract limitations of claims 1, 11, and 21 or provide further embellishments of the limitations recited in independent claims 1, 11, and 21. Claims 2, 4, 6-10, 16-20, and 22 do not introduce any further additional elements. Thus, dependent claims 2, 4, 6-10, 16-20, and 22 are ineligible.
Furthermore, it is noted that certain dependent claims recite additional elements supplemental to those recited in independent claims 1, 11, and 21: using k-means clustering (claims 3 and 13) and an output of an audio similarity algorithm (claims 5 and 15). However, these elements do not integrate the abstract idea into a practical application because they merely amount to using a computer to apply the abstract idea in a particular technological environment or field of use. Additionally, these elements do not amount to significantly more because they merely amount to using a computer to apply the abstract idea and are no more than a general link of the use of the abstract idea to a particular technological environment.
Thus, dependent claims 3, 5, 13, and 15 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 11, 13, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 7,689,457 B2 [previously cited]) in view of Byrne et al. (US 2013/0041779 A1 [previously cited]), further in view of Kumar et al. (US 2020/0311108 A1), and further in view of Decker et al. (US 2024/0152512 A1 [previously cited]).
Regarding Claim 1, Chan discloses a system comprising:
one or more processors (Fig. 9; Col 22, lines 31-42); and
one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform (Fig. 9; Col 22, lines 31-42[The services and other application components… may be implemented in software code modules executed by any number of general purpose computers or processors, with different services optionally… The various data repositories 104, 108, 120 may similarly be implemented using any type of computer storage, and may be implemented using databases, flat files, or any other type of computer storage architecture]):
using a proactive low discoverability identification model, published on the web-based marketplace by (Figs. 1-2; Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the computer system performing the steps in Fig. 2 is comparable to a proactive low discoverability identification model and that the steps are comparable to identifying product recommendations):
receiving historical engagement information for products in the web-based marketplace (Figs. 7 and 9; Col. 20, line 50 to Col. 21, line 23[a web-based system that provides functionality for users to browse and purchase items from an electronic catalog…web servers 100 provide user access to a catalog of items represented in a database 108 or collection of databases. The items preferably include or consist of items that may be purchased via the web site… The system also includes a data repository 116 (e.g., one or more databases) that stores various types of user data… For example, the data repository 116 may store users’ purchase histories, item viewing histories, item ratings, and item tags. The purchase histories and item viewing histories may be stored as lists of item identifiers together with associated event timestamps. The various types of user data may be accessible to other components of the system via a data service (not shown), which may be implemented as a web service]; Examiner notes that storing users’ purchase histories, item viewing histories, item ratings, and item tags is comparable to receiving historical engagement information in view of ¶0051 of the applicant’s specification);
clustering the products of a product type into clusters according to a set of attributes (Figs. 1-2; Col. 4, lines 20-60[in FIG. 1, the collection consists of items purchased by the user, and each point represents a purchased item… the collection may additionally or alternatively be based on other types of item “selection” actions (e.g., rentals, views, downloads, shopping cart adds, wish list adds, subscription purchases, etc.). The distance between each pair of items (points) in FIG. 1 represents the calculated degree to which the items are similar, with relatively small distances representing relatively high degrees of similarity. Any appropriate distance metric(s) can be used for item clustering. For example, if the items are represented in a hierarchical browse structure such as a directed acyclic graph, each item may be represented as a vector of the browse nodes or categories in which it falls… The respective vectors of two items can then be compared to compute the distance between the items… The distances between the items may additionally or alternatively be calculated based on other criteria. For example, the distance between two items, A and B, may be calculated based on any one or more of the following: (a) the relative frequency with which A and B co-occur within purchase histories of users, (b) the relative frequency with which A and B co-occur within item viewing histories of users, (c) the relative frequency with which users tag A and B with the same textual tag, (d) the relative frequency with which A and B co-occur within results of keyword searches, (e) the degree to which A and B contain or are characterized by common keywords. The foregoing are merely examples; numerous other criteria may be used to calculate the item distances] in view of Col. 6, lines 42-52[As depicted by block 30, the relevant item collection for the target user is initially retrieved. This collection may, for example, include or consist of items the target user has purchased, rented, viewed, downloaded, rated, added to a shopping cart, or added to a wish list. The items may be products represented in an electronic catalog, or may be some other type of item (e.g., web sites) that is amenable to clustering]);
identifying a cluster of the products with a highest quantity of the products as a first subset of the products (Col. 7, lines 4-14[The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases. The sources may, for example, be selected from the highest scored clusters only.]; Examiner notes that scoring clusters based on the number of items and then selecting the highest scored cluster is comparable to identifying a cluster with a highest quantity of the products);
identifying a second subset of the products, that are similar to the first subset of the products based on at least one similarity criterion (Col. 7, lines 1-18[As one example, a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases. The sources may, for example, be selected from the highest scored clusters only, with additional item-specific criteria optionally used to select specific items from these clusters]; Examiner notes that scoring clusters based on distance from one cluster to another (similarity) and selecting the highest scored cluster is comparable to identifying a second subset of products similar to the first subset); and
determining a third subset of the products by filtering the second subset of the products based on comparing the historical engagement information for the second subset of the products to the historical engagement information for the first subset of the products and identifying the third subset of the products as a portion of the second subset of the products (Col. 7, lines 1-18[As one example, a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster… The sources may, for example, be selected from the highest scored clusters only, with additional item-specific criteria optionally used to select specific items from these clusters] in view of Col. 7, lines 29-44[The ranked list of recommended items, or an appropriately filtered version of this list (e.g., with items already purchased by the user removed), is then presented to the target user]; Examiner notes that, according to the reference, the ranked list of items is the second group and the filtered version of that list is the third group; Examiner further notes that scoring clusters based on purchase dates of items in the cluster (comparable to historical engagement information according to ¶0051 of the applicant’s specification) and selecting the highest scored cluster is comparable to comparing clusters);
displaying the third subset of the products on one or more graphical user interface (GUIs) on the web-based marketplace based on one or more web-based searches in the web-based marketplace (Figs. 6 and 7[shows the displaying on the user GUI]; Col. 11, lines 32-41[In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user] in view of Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]); and
determining, using an impact assessment model, of the first products by (Figs. 2 and 7[shows the displaying on the user GUI]; Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the generated recommendations are comparable to the first products with increased visibility on the marketplace and that the computer system performing the steps in Fig. 2 is comparable to an impact assessment model):
determining at least one benchmark for the first subset of the products based on the historical engagement information for the first subset of the products (Fig. 6; Col. 11, lines 32-41[FIG. 6 illustrates a second embodiment of a process of arranging the recommended items into mutually exclusive categories or clusters. In step 80, a clustering algorithm is applied to an appropriate item collection of the target user. In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user.] in view of Col. 8, lines 7-9[The clusters may be generated by applying an appropriate clustering algorithm to the user's purchase history or other collection] further in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]; Examiner notes that a threshold amount of time is comparable to a benchmark and, according to ¶0051 of the applicant’s specification, historical engagement information is comparable to item viewing histories and purchase histories); and
determining a score for the third subset of the products (Col. 6, lines 21-33[each cluster or item (or each outlier cluster or item) can be scored based on multiple criteria]; Col. 7, lines 1-28[a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases]) based on the at least one benchmark and the third subset of the products, as displayed with the increased visibility (Figs. 6 and 7[shows the displaying on the user GUI]; Col. 11, lines 32-41[In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user] in view of Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the generated recommendations are comparable to the products with increased visibility on the marketplace and that a threshold amount is comparable to a benchmark).
Although Chan discloses identifying product recommendations published on a web-based marketplace, Chan does not explicitly disclose identifying first products having discoverabilities within a web-based marketplace that are low relative to discoverabilities of other products.
However, Byrne et al., hereinafter, Byrne, teaches identifying products having discoverabilities that are low relative to other products (Fig. 2; Claim 1[identifying, by the processor, a first SKU of a relatively low performing product and a second SKU of a relatively high performing product based at least in part on analysis of the sales data] in view of ¶0029[The index or performance score 206 may represent, for example, the performance of a given SKU 202 relative to other SKUs in a given category or it may provide a formulaic representation of the SKUs performance in the view of the retailer (i.e., according the analysis performed by the retailer)]; Examiner notes that performance is comparable to discoverability of a product).
Although Chan discloses displaying a subset of products on GUIs on the web-based marketplace based on web-based searches, Chan does not explicitly disclose displaying products with increased frequency.
However, Byrne teaches displaying products with increased frequency (¶0038[Yet another modification option may be referred to as a “Boost” which provides the supplier with an opportunity to pay a premium in exchange for ensuring that their SKU is ranked or positioned higher within set of search results for a period of time.]; Examiner notes that ranking higher within search results is comparable to displaying the result with increased frequency).
Although Chan discloses displaying a subset of products on GUIs on the web-based marketplace based on web-based searches, Chan does not explicitly disclose determining an impact on the discoverabilities of products based on displaying the third subset of the products with increased frequency.
However, Byrne teaches determining an impact on the discoverabilities of the products based on the display (Fig. 2; ¶0044[As previously noted, after making modifications to one or more parameters associated with the offering and presentation of a particular SKU 202, the supplier may return to the interface 200 to monitor the SKU 202 and determine whether there has been any change in its performance by reviewing the various categories of information 204, 208, 210 and 212. The supplier may continue to do this as indicated previously with regard to FIG. 1, until they are satisfied with the sales performance of the SKU 202.] in view of ¶0038[Yet another modification option may be referred to as a “Boost” which provides the supplier with an opportunity to pay a premium in exchange for ensuring that their SKU is ranked or positioned higher within set of search results for a period of time.]; Examiner notes that ranking higher within search results is comparable to displaying the result with increased frequency).
Although Chan discloses displaying a subset of products on GUIs on the web-based marketplace based on web-based searches, Chan does not explicitly disclose determining a lift score for the third subset of the products based on the benchmark and engagement information for products as displayed with the increased frequency.
However, Byrne teaches determining a lift score for products based on engagement information (Fig. 2; ¶0029[The information may be provided to the supplier in any of a number of forms. For example, the information may be provided as raw data, such as in a table or in a trending graph (see, e.g., FIG. 6). The data and information may also be presented as analytical data such as by comparing it to other SKUs in a common category of goods, or it may be provided in the form of an index or performance score 206. The index or performance score 206 may represent, for example, the performance of a given SKU 202 relative to other SKUs in a given category or it may provide a formulaic representation of the SKUs performance in the view of the retailer (i.e., according the analysis performed by the retailer). In one particular embodiment, the index or performance score 206 may be represented as a percentage of average performance for a defined category of goods. In other words, a score of 100% indicates that the SKU is performing at least as well as the average SKU within the defined category. A score of less than 100% would indicate that the SKU is performing sub optimally, or less than average, relative to other SKUs within the goods category for the specified parameter (e.g., Impressions)] in view of ¶0038[Yet another modification option may be referred to as a “Boost” which provides the supplier with an opportunity to pay a premium in exchange for ensuring that their SKU is ranked or positioned higher within set of search results for a period of time.]; Examiner notes that a performance score is comparable to a lift score and impressions are comparable to engagement information).
The system of Byrne is applicable to the system of Chan as they share characteristics and capabilities, namely, they are both targeted to generating product recommendations online. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chan to include products having lower discoverability than other products, lift scores, and displaying products with increased frequency, as taught by Byrne. One of ordinary skill in the art would have been motivated to expand the system of Chan in order to improve the e-commerce experience, including efforts to provide systems and methods that make the experience more effective and more profitable for those offering goods and services for sale (¶0006).
Although Chan discloses identifying a second subset of products that are similar to the first subset of the products based on a similarity criterion, Chan in view of Byrne does not explicitly teach identifying a subset of products from one or more clusters other than the identified cluster.
However, Kumar et al., hereinafter, Kumar, teaches identifying a subset of products from clusters other than an identified cluster (Fig. 28; ¶0120[Determining the one or more product clusters may comprise receiving (e.g., from a database stored on the server 102) a first plurality of product identifiers each sharing a common attribute. The common attribute may be based on clinical equivalence, intended use, size, quantity, a combination thereof, and the like.]).
The system of Kumar is applicable to the system of Chan in view of Byrne as they share characteristics and capabilities, namely, they are all targeted to improving search. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chan in view of Byrne to include identifying a subset of products from one or more clusters other than the identified cluster, as taught by Kumar. One of ordinary skill in the art would have been motivated to expand the system of Chan in view of Byrne in order to operate quickly and efficiently (¶0043).
Although Chan discloses determining a third subset of products by filtering the second subset of products based on comparing historical engagement information, Chan in view of Byrne in view of Kumar does not explicitly teach products that are less than an engagement threshold of an engagement for the first subset of the products.
However, Decker et al., hereinafter, Decker, teaches determining products that are less than an engagement threshold of an engagement for products (Fig. 1; ¶0033[The cold start search system 102 may adjust the position of the newly added item within the search results. The cold start search system 102 may adjust the position of the newly added item within the plurality of search results based on the engagement score assigned by the system to the newly added item. For example, the cold start search system 102 may adjust the position of the newly added item so that it appears earlier (e.g., one or more positions closer to the beginning) in the search results, based on the relative value of the engagement score assigned to the newly added item compared to the engagement scores of other (e.g., historical) items in the search result.]).
The system of Decker is applicable to the system of Chan in view of Byrne in view of Kumar as they share characteristics and capabilities, namely, they are all directed to improving search. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the filtering of products as taught by Chan in view of Byrne in view of Kumar to include determining products that are within an engagement threshold as taught by Decker. One of ordinary skill in the art before the effective filing date would have been motivated to expand the system of Chan in view of Byrne in view of Kumar in order to enable improved search results because the position of items that lack engagement data (e.g., newly added items) may be adjusted based on engagement data associated with similar items (abstract).
Regarding Claim 2, Chan in view of Byrne in view of Kumar in view of Decker teaches the system of claim 1, Chan further discloses wherein the historical engagement information comprises at least one of: an impression, a product view, an add-to-cart, or an order (Col. 21, lines 13-23[The system also includes a data repository 116 (e.g., one or more databases) that stores various types of user data, including identifiers of the items in each user's collection. For example, the data repository 116 may store users’ purchase histories, item viewing histories, item ratings, and item tags. The purchase histories and item viewing histories may be stored as lists of item identifiers together with associated event timestamps. The various types of user data may be accessible to other components of the system via a data service (not shown), which may be implemented as a web service]).
Regarding Claim 3, Chan in view of Byrne in view of Kumar in view of Decker teaches the system of claim 1, Chan further discloses wherein clustering the products of the product type into clusters according to the set of attributes further comprises:
using k-means clustering according to the set of attributes (Fig. 2; Col. 6, lines 42-52[This collection may, for example, include or consist of items the target user has purchased, rented, viewed, downloaded, rated, added to a shopping cart, or added to a wish list. The items may be products represented in an electronic catalog, or may be some other type of item (e.g., web sites) that is amenable to clustering (according to ¶0052 of the applicant’s specification, attributes are comparable to a number of reviews, an average rating, a quality score, a number of orders in a season, or a number of impressions in a season)] in view of Col. 4, lines 62-65[The clusters may be generated using any appropriate type of clustering algorithm that uses item distances to cluster items. Examples include K-means] further in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]; also refer to Col. 10, which discloses that each category corresponds uniquely to a cluster).
Regarding Claim 4, Chan in view of Byrne in view of Kumar in view of Decker teaches the system of claim 3, Chan further discloses wherein the set of attributes includes at least one of: a number of reviews, an average rating, a quality score, a number of orders in a season, or a number of impressions in the season (Fig. 2; Col. 6, lines 42-52[As depicted by block 30, the relevant item collection for the target user is initially retrieved. This collection may, for example, include or consist of items the target user has purchased, rented, viewed, downloaded, rated, added to a shopping cart, or added to a wish list. The items may be products represented in an electronic catalog, or may be some other type of item (e.g., web sites) that is amenable to clustering.] in view of Col. 5, lines 43-53 which disclose the consideration of the purchase dates (or seasons) when clustering, further in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]).
Regarding Claim 6, Chan in view of Byrne in view of Kumar in view of Decker teaches the system of claim 1, Chan further discloses wherein determining the third subset of the products by filtering the second subset of the products based on the historical engagement information for the second subset of the products further comprises:
identifying the historical engagement information for the second subset of the products (Col. 21, lines 13-23[The system also includes a data repository 116 (e.g., one or more databases) that stores various types of user data, including identifiers of the items in each user's collection. For example, the data repository 116 may store users’ purchase histories, item viewing histories, item ratings, and item tags. The purchase histories and item viewing histories may be stored as lists of item identifiers together with associated event timestamps. The various types of user data may be accessible to other components of the system via a data service (not shown), which may be implemented as a web service] in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]; according to ¶0051 of the applicant’s specification, historical engagement information is comparable to item viewing histories and purchase histories).
Regarding Claim 11, Chan discloses a method implemented via execution of computing instructions configured to run at one or more processors and configured to be stored at non-transitory computer-readable media, the method comprising (Fig. 9; Col. 22, lines 31-42[The services and other application components… may be implemented in software code modules executed by any number of general purpose computers or processors, with different services optionally but not necessarily implemented on different machines interconnected by a network. The code modules may be stored in any type or types of computer storage, such as hard disk drives and solid state memory devices. The various data repositories 104, 108, 120 may similarly be implemented using any type of computer storage, and may be implemented using databases, flat files, or any other type of computer storage architecture]):
using a proactive low discoverability identification model, published on the web-based marketplace by (Figs. 1-2; Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the computer system performing the steps in Fig. 2 is comparable to a proactive low discoverability identification model and that the steps are comparable to identifying product recommendations):
receiving historical engagement information for products in the web-based marketplace, wherein the historical engagement information comprises at least one of: an impression, a product view, an add-to-cart, or an order (Col. 21, lines 13-23[The system also includes a data repository 116 (e.g., one or more databases) that stores various types of user data, including identifiers of the items in each user's collection. For example, the data repository 116 may store users’ purchase histories, item viewing histories, item ratings, and item tags. The purchase histories and item viewing histories may be stored as lists of item identifiers together with associated event timestamps. The various types of user data may be accessible to other components of the system via a data service (not shown), which may be implemented as a web service]);
clustering the products of a product type into clusters according to a set of attributes, wherein the set of attributes includes at least one of: a number of reviews, an average rating, a quality score, a number of orders in a season, or a number of impressions in the season (Fig. 2; Col. 6, lines 42-52[As depicted by block 30, the relevant item collection for the target user is initially retrieved. This collection may, for example, include or consist of items the target user has purchased, rented, viewed, downloaded, rated, added to a shopping cart, or added to a wish list. The items may be products represented in an electronic catalog, or may be some other type of item (e.g., web sites) that is amenable to clustering.] in view of Col. 5, lines 43-53 which disclose the consideration of the purchase dates (or seasons) when clustering, further in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]);
identifying a cluster of the products with a highest quantity of the products as a first subset of the products (Col. 7, lines 4-14[The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases. The sources may, for example, be selected from the highest scored clusters only.]; Examiner notes that scoring clusters based on the number of items and then selecting the highest scored cluster is comparable to identifying a cluster with a highest quantity of the products);
identifying a second subset of the products that are similar to the first subset of the products based on at least one similarity criterion (Col. 7, lines 1-18[As one example, a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases. The sources may, for example, be selected from the highest scored clusters only, with additional item-specific criteria optionally used to select specific items from these clusters]; Examiner notes that scoring clusters based on distance from one cluster to another (similarity) and selecting the highest scored cluster is comparable to identifying a second subset of products similar to the first subset); and
determining a third subset of the products by filtering the second subset of the products based on comparing the historical engagement information for the second subset of the products to the historical engagement information for the first subset of the products and identifying the third subset of the products as a portion of the second subset of the products (Col. 7, lines 1-18[As one example, a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster… The sources may, for example, be selected from the highest scored clusters only, with additional item-specific criteria optionally used to select specific items from these clusters] in view of Col. 7, lines 29-44[The ranked list of recommended items, or an appropriately filtered version of this list (e.g., with items already purchased by the user removed), is then presented to the target user (Examiner notes that according to the reference, the ranked list of items is the second group and the filtered version of that list is the third group)]; Examiner notes that scoring clusters based on purchase dates of items in the cluster (comparable to historical engagement information according to ¶0051 of the applicant’s specification) and selecting the highest scored cluster is comparable to comparing clusters);
displaying the third subset of the products on one or more graphical user interfaces (GUIs) on the web-based marketplace based on one or more web-based searches in the web-based marketplace (Figs. 6 and 7[shows the displaying on the user GUI]; Col. 11, lines 32-41[In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user] in view of Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]); and
determining, using an impact assessment model, of the first products by (Figs. 2 and 7[shows the displaying on the user GUI]; Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the generated recommendations are comparable to the first products with increased visibility on the marketplace and that the computer system performing the steps in Fig. 2 is comparable to an impact assessment model):
determining at least one benchmark for the first subset of the products based on the historical engagement information for the first subset of the products (Fig. 6; Col. 11, lines 32-41[FIG. 6 illustrates a second embodiment of a process of arranging the recommended items into mutually exclusive categories or clusters. In step 80, a clustering algorithm is applied to an appropriate item collection of the target user. In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user.] in view of Col. 8, lines 7-9[The clusters may be generated by applying an appropriate clustering algorithm to the user's purchase history or other collection] further in view of Col. 22, lines 56-60[The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations are intended to fall within the scope of this disclosure]; Examiner notes that a threshold amount of time is comparable to a benchmark and according to ¶0051 of the applicant’s specification, historical engagement information is comparable to item viewing histories and purchase histories); and
determining a score for the third subset of the products based on the at least one benchmark and the third subset of the products, as displayed (Col. 6, lines 21-33 [each cluster or item (or each outlier cluster or item) can be scored based on multiple criteria], Col. 7, lines 1-28 [a score may be generated for each cluster, and these scores may be used to select the clusters from which the source items are obtained. The cluster scores may be based on a variety of factors, such as some or all of the following: (1) the number of items in the cluster, (2) the distance of the cluster from other clusters, (3) the cluster's homogeneity, (4) the ratings, if any, of items included in the cluster, (5) the purchase dates, if any, of the items in the cluster, (6) if applicable, the extent to which the items that the cluster contains are close to items that represent known gift purchases]) and the at least one benchmark and the third subset of the products, as displayed with the increased visibility (Figs. 6 and 7[shows the displaying on the user GUI]; Col. 11, lines 32-41[In the context of a system that supports item sales, this item collection may, for example, include or consist of items purchased and/or rated by the target user. In the context of a news web site, the item collection may, for example, include or consist of news articles viewed (or viewed for some threshold amount of time) by the target user] in view of Col. 7, lines 29-32[In block 40, the selected source items are used to generate recommendations for the target user]; Examiner notes that the generated recommendations are comparable to the products with increased visibility).