Prosecution Insights
Last updated: April 19, 2026
Application No. 17/246,179

MACHINE LEARNING BASED METHODS AND APPARATUS FOR AUTOMATICALLY GENERATING ITEM RANKINGS

Status: Final Rejection (§103)
Filed: Apr 30, 2021
Examiner: GORNEY, BORIS
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Walmart Apollo LLC
OA Round: 6 (Final)

Grant Probability: 40% (At Risk)
Projected OA Rounds: 7-8
Projected Time to Grant: 4y 11m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 40% (79 granted / 200 resolved; -15.5% vs Tech Center average)
Interview Lift: +19.4% higher allowance rate in resolved cases with an interview
Typical Timeline: 4y 11m average prosecution; 8 applications currently pending
Career History: 208 total applications across all art units

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 46.6% (+6.6% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)
Tech Center averages are estimates, based on career data from 200 resolved cases.
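The derived figures above are simple arithmetic on the raw counts. A minimal sketch of that arithmetic (helper names are mine; the 40.0% per-statute Tech Center average is inferred from the displayed deltas rather than stated anywhere in the report):

```python
# Reconstructing the dashboard's derived examiner metrics from the
# raw counts shown above (helper names are illustrative).

def pct(numerator: int, denominator: int) -> float:
    """Rate as a percentage, rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

career_allow_rate = pct(79, 200)   # 39.5, displayed as 40%

# Each "vs TC avg" delta is the examiner's statute-specific allowance
# rate minus the Tech Center average estimate (inferred to be 40.0%):
TC_AVG = 40.0
statute_rates = {"101": 17.5, "103": 46.6, "102": 13.7, "112": 19.1}
deltas = {s: round(r - TC_AVG, 1) for s, r in statute_rates.items()}
```

Every per-statute delta is consistent with the same 40.0% baseline, which is why that value is used here.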

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the amendment to the original application. This action is Final. Claims 1-20 are pending and have been examined.

Response to Amendments

In the reply filed 12/20/2024, claims 1, 10, and 16 were amended. Accordingly, claims 1-20 are pending.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been carefully considered but are not deemed persuasive in view of the rejections below. The Examiner has added new grounds of rejection necessitated by the amendments; see the detailed rejections below. All claims have been updated below with clarifying prior art citations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sarma et al., U.S. Patent No. 8,645,221 (hereinafter "Sarma"), in view of Comar et al., U.S. Patent No. 10,776,847 (hereinafter "Comar"), and further in view of Ghavamzadeh et al., U.S. Patent Application Publication No. 2016/0283970 (hereinafter "Ghavamzadeh").

Regarding claim 1, Sarma teaches a system comprising: a computing device configured to (Sarma [Col.
1 line 57]: computing device): receive user session data for a plurality of users (Sarma [Col. 4 lines 8 – 19]: In such case, users may employ their virtual shopping cart 183 not just for purchasing items in a given session, but for ultimately designating items 136 to purchase sometime in the future in other sessions. In this respect, a "session" is defined as a time within which a customer interacts with the network site 126 using the client device 106, or maintains a connection with the network site 126 during a single log on.); generate, based on the user session data (Sarma [Col. 03 line 16 – line 20]: “A customer may proceed to a checkout … virtual shopping cart 183 during a given session.”), user engagement data (Sarma [Col. 2 lines 51 – 52]: Associated with each customer account 159 are customer information 173, a browse history 176, a purchase history 179, a virtual shopping cart 183, and potentially other data.) characterizing engagements of one or more corresponding items (Sarma [Col. 4 lines 8 – 19]: During interaction with the network site 126 facilitated by the electronic commerce system 123, a customer may include various items 136 in their virtual shopping cart 183 in order to purchase such items 136. In some cases, items 136 may be added to or stored in a virtual shopping cart 183, but then not purchased. In such case, users may employ their virtual shopping cart 183 not just for purchasing items in a given session, but for ultimately designating items 136 to purchase sometime in the future in other sessions.) for each of a plurality of queries (Sarma [Col. 
7 lines 53 – 57]: “To this end, the search terms are entered in a search text box 203 in order to execute the search, where such search terms are shown in the search text box 203 in the results network page 139a to inform the customer of the search performed.” Here, search terms are search queries.); determine, based on the user session data, a number of examines for each of the one or more corresponding items for each of the plurality of queries (Sarma [Col. 02 line 35 – line 47]: The electronic commerce system 123 is configured to facilitate selection of items 136 from an item catalog 143. The items 136 in the item catalog 143 may be organized into various item categories 146. The electronic commerce system 123 is configured to facilitate the viewing, selection, and purchase of items 136. From time to time, the electronic commerce system 123 may identify a subset of the items 136 that are presented to a customer for various purposes as will be described. In one embodiment, the item ranking process 129 is executed as a portion of the electronic commerce system 123 in order to facilitate a ranking of such a subset of items 136 as a function of virtual shopping cart activity with respect to such items 136 as will be described.”); Sarma doesn’t clearly teach, normalize the user engagement data for each of the one or more corresponding items for each of the plurality of queries based on the corresponding number of examines; However, Comar [Col. 04 line 46 – line 59] teaches, “Approaches in accordance with various embodiments can attempt to factor the observed performance (e.g., click) data into user intent and content relevance. This can be performed in at least some embodiments using a matrix factorization approach derived from a probability distribution, such as a Poisson-Beta or Poisson-Gamma generative model. Various approaches can also attempt to smooth different types of bias across queries, as each bias type may not be present in every individual query. 
Further, within intent bias there can be various types of intent that are determined and utilized to improve relevance. These can include, for example, action (e.g., purchase or consume), explore (e.g., navigate or obtain information), ambiguity, task complete, decision making, and the like.” wherein the user engagement data is normalized based on: generating, for each of the one or more corresponding items for each of the plurality of queries (Comar [Col. 8 line 65 – Col. 9 line 5]: In some embodiments, information for the request might also be compared against user data in a user data store 312 or other such location do determine, for example, whether the user has access rights to that content. In some embodiments user data might also be analyzed to determine which type of content to provide, additional supplemental content to provide with the primary content, and the like.), a first normalized order-through rate (OTR) based on examines and orders received for the corresponding item (Comar [Col. 7 line 52]: purchase rate; and Comar [Col. 2 line 13]: click through rate (CTR)): receiving a user-selected percentile, generating a Beta distribution for a random variable representing OTR with two parameters (Comar [Col. 7 lines 37 – 60]: The first intent (of two intents) from the proposed multi-intent Poisson-Beta model gave the highest mean reciprocal rank across all the query categories. In particular, the first intent outperforms all other relevance measures when the query length increases. Here, the intents correspond to an action intent and an explore intent as explained elsewhere herein. Users having an action intent typically issue a pointed query (in the case of a generic query like fishing, users with a purchase intent tend to refine the search results with additional category filters to narrow the choice of retrieved items) and the purchases typically happen from the first few positions. The purchase rate rapidly drops thereafter. 
Users with an explore intent tend to move between pages, clicking (exploring) on items until they settle for an item of their liking. A distinguishing aspect of the action intent estimated from the proposed model is that the action intent decreases more rapidly than the position bias estimated by the baselines algorithms like SI-Gamma and SI-Beta, which in turn impacts the estimated item relevance.), wherein one of the two parameters is determined based on a first number of the orders received for the corresponding item (Comar [Col. 12 line 54 – Col. 13 line 33]: The content can include content for items to be recommended to a customer, items corresponding to a search request, or items to be suggested as a set of deals to a number or set of customers, among other such options. … The performance values to be normalized can depend at least in part upon the rules or policies for the deals, such as to rank or prioritize based on clicks, cart adds, or purchases, among other such options. … Once the normalized values are obtained, the content can be ranked 610 by the normalized values. This can include, as discussed elsewhere herein, generating an ordered ranking and then selecting at least a subset of highest-ranked content items to be selected for display. The display positions for the selected content can then be determined 612 based at least in part upon the ranking. Since certain areas or display positions will be most likely to result in an action for a determined intent, the highest ranked instance of content can be placed in the most likely action position for a specific intent, followed by the next highest ranked instance of content, and so on. 
For an action intent this can involve placing the highest ranked items in the first few results, while for an explore intent this can involve placing items after the first few results and scattered over the next several results, among other such options as may be determined using the trained models.), wherein the other of the two parameters is determined based on a second number of the examines received for the corresponding item minus the first number of the orders received for the corresponding item (Comar [Col. 6 lines 20 – 67]: Here, the clicks an item i receives at a position p is an aggregation of clicks arising from the multiple intents, where each intent is a Poisson random variable with the rate P.sub.ipk. Since the sum of Poisson random variables is a Poisson with a rate parameter being the simple sum of all rates, the model can become: … The update equations for r.sub.ik and b.sub.pk are interdependent or cyclic, requiring multiple iterations to converge to an optimal value. At any iteration, if the optimal value for one of the parameters is reached, then the other parameter can be estimated with good accuracy. In particular, starting with a good estimate of (b.sub.pk) results in an accurate estimate for r.sub.ik.), computing, according to the user-selected percentile, a percentile point of the random variable representing OTR in the Beta distribution (Comar [Col. 7 lines 41 – 55]: The first intent (of two intents) from the proposed multi-intent Poisson-Beta model gave the highest mean reciprocal rank across all the query categories. In particular, the first intent outperforms all other relevance measures when the query length increases. Here, the intents correspond to an action intent and an explore intent as explained elsewhere herein. 
Users having an action intent typically issue a pointed query (in the case of a generic query like fishing, users with a purchase intent tend to refine the search results with additional category filters to narrow the choice of retrieved items) and the purchases typically happen from the first few positions. The purchase rate rapidly drops thereafter. Users with an explore intent tend to move between pages, clicking (exploring) on items until they settle for an item of their liking.), and generating a second normalized OTR as a value equal to the percentile point of the random variable representing OTR (Comar [Col. 7 line 66 – Col. 8 line 12]: If the example plot 200 of FIG. 2A, a first intent profile 202 corresponds to an action intent, and a second intent profile 204 corresponds to an explore intent. The query “fishing” corresponds to a generic word which retrieves content for a variety of items from several different categories, including fishing gadgets or equipment, fishing related apparel, fishing related books, and the like. This particular query has a very strong exploratory intent and in practice it has been determined that the users have clicked items placed at several different positions across multiple pages. As illustrated, the action intent starts off with a very high (smoothed) click through rate for the initial positions then falls off rapidly until after the fiftieth position there is very little CTR at any position.); generate ranking data characterizing a ranking of at least a subset of the plurality of items based on the normalized user engagement data (Comar [Col. 4 lines 34 – 45]: In some situations the ranking is based at least in part upon an estimated relevance or quality of the deal with respect to a certain page, query, type of user, or other such factor. 
These values are often determined at least in part by monitoring or tracking the number of actions taken by users with respect to displays of each respective deal (or similar types of deals). In many cases, this can include a click through rate (CTR) or other such metric that identifies, for example, the frequency with which users clicked on content for a displayed deal when that deal was displayed to a user (or at least on displayed on a display screen or other such mechanism).); and train a machine learning model based on the ranking data (Comar [Col. 9 lines 26 – 61]: As mentioned, this can include training for various types of bias, such as position and intent bias. In this example the content provider environment 306 will at least include a bias model training component or service that includes intent logic 320 for determining intent and training a bias model using the determined intent data. … For example, some types of content might be ranked based on purchases, while other types might be ranked based on views or clicks, or combinations thereof. … In some embodiments, the intent data and training of the bias model might be performed by an external intent service 324 or system, which can have access to at least the performance data in the performance data store 322 in order to train the appropriate bias model and provide the bias-adjusted relevance values for use by the content manager 310 and/or search engine 316 in selecting and/or ranking content.). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to incorporate Comar's teachings into Sarma's system by adding the user engagement data / user intent features. Sarma and Comar are analogous art directed to the same field of endeavor, e-commerce. An ordinarily skilled artisan would have been motivated to make the combination to provide Sarma's system with enhanced user data.
(See Comar [Abstract], [Col. 04 line 46 – line 59], [Col. 4 lines 34 – 45], [Col. 9 lines 26 – 61]). One of the biggest advantages of machine learning algorithms is their ability to improve over time. Machine learning technology typically improves efficiency and accuracy thanks to the ever-increasing amounts of data that are processed. Comar does not clearly teach wherein the first normalized OTR is equal to an expectation of the random variable representing OTR in the Beta distribution; however, Ghavamzadeh [0094] teaches, “… random variable whose expectation is denoted by c(x, a)=E[C(x, a)]; P(·|x, a) is the transition probability distribution;” Furthermore, Ghavamzadeh [0190] teaches, “The method 800 includes an act 802 of receiving a risk-tolerance value β. In particular, act 802 can involve receiving a threshold tolerance value for an ad serving campaign. For example, act 802 can involve receiving a risk threshold that a marketer is willing to allow the ad serving campaign to perform. The risk-tolerance value β can comprise a variety of forms, including in a business outcome (such as revenue or sales), consumer behavior (such as clicks, selections, or purchases), a threshold click-thru rate, or a statistical measure (such as CVaR, standard deviations, value-at-risk, etc.). Act 802 can optionally involve receiving or identifying a confidence level α for the risk-tolerance value β. For example, act 802 can involve identifying a default value confidence level α, such as for example 95%. Alternatively, act 802 can involve receiving input from a marketer that adjusts the default confidence level α or otherwise provides the confidence level α.” It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to incorporate Ghavamzadeh's teachings into the combined Sarma-Comar system by adding the random-variable expectation feature.
Sarma, Comar, and Ghavamzadeh are analogous art directed to the same field of endeavor, e-commerce. An ordinarily skilled artisan would have been motivated to make the combination to provide the Sarma-Comar system with enhanced user data. (See Ghavamzadeh [Abstract], [0094], [0190]). One of the biggest advantages of machine learning algorithms is their ability to improve over time. Machine learning technology typically improves efficiency and accuracy thanks to the ever-increasing amounts of data that are processed.

Regarding claim 2, the system of claim 1, wherein the computing device is configured to: generate training features based on the user engagement data for each of the one or more corresponding items of each of the plurality of queries; and generate training labels based on the ranking data; wherein the machine learning model is trained based on the training features and the training labels (Comar [Col. 9 lines 26 – 61]: As mentioned, this can include training for various types of bias, such as position and intent bias. In this example the content provider environment 306 will at least include a bias model training component or service that includes intent logic 320 for determining intent and training a bias model using the determined intent data. … For example, some types of content might be ranked based on purchases, while other types might be ranked based on views or clicks, or combinations thereof. … In some embodiments, the intent data and training of the bias model might be performed by an external intent service 324 or system, which can have access to at least the performance data in the performance data store 322 in order to train the appropriate bias model and provide the bias-adjusted relevance values for use by the content manager 310 and/or search engine 316 in selecting and/or ranking content.).
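Independent of the cited references, the normalization recited in claim 1 (with the 5th percentile of claim 6) reads as a conservative Beta-distribution estimate of an item's order-through rate. The sketch below is one reading of the claim language, not code from the application; the function names and the integer-parameter CDF shortcut are assumptions of this sketch.

```python
import math

def beta_cdf(x: float, a: int, b: int) -> float:
    # For integer shape parameters, the regularized incomplete beta
    # function reduces to an exact binomial tail sum.
    n = a + b - 1
    return sum(math.comb(n, j) * x**j * (1.0 - x)**(n - j)
               for j in range(a, n + 1))

def beta_ppf(p: float, a: int, b: int) -> float:
    # Percentile point by bisection (beta_cdf is increasing in x).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if beta_cdf(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def normalized_otrs(orders: int, examines: int, percentile: float = 0.05):
    """Claimed steps: one Beta parameter from the orders, the other from
    examines minus orders; the first normalized OTR is the expectation,
    the second is the user-selected percentile point.
    Assumes 1 <= orders < examines."""
    a, b = orders, examines - orders
    first = a / (a + b)                  # expectation of Beta(a, b)
    second = beta_ppf(percentile, a, b)  # e.g. claim 6's 5th percentile
    return first, second
```

For an item with 10 orders over 100 examines, the first normalized OTR is 0.10 while the 5th-percentile second OTR is lower, penalizing items whose rate is supported by little data; with 100 orders over 1,000 examines the percentile point moves back toward 0.10.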
Regarding claim 3, the system of claim 2, wherein the machine learning model is based on Gradient Boosted Trees (Comar [Col. 10 line 43 – line 61]: “The intent logic 320 can collect data for each keyword of a set of keywords across all decisions that were made, such as search results for received search queries, and can determine the content that was retrieved and provided in response to those decisions. The intent logic 320 can also determine the relative positions in which those instances of content were placed, as well as performance data for the instances of content in each position. As mentioned, the performance data can include data such as clicks, views, purchases, consumptions, and other specified actions. As mentioned, the performance by position will vary based on intent, so this data can be used to model the impact of intent on performance for the various keywords, and based on the intent and performance data the true relevance of an instance of content can be determined for a specific keyword or query. The bias model presented above can be trained using the performance data and related data to estimate the true relevance for each type of intent and/or with the intent bias removed, among other such options.”). Regarding claim 4, the system of claim 1, wherein determining the number of examines comprises determining a number of item clicks, a number of add-to-carts, and a number of item orders for each of the one or more corresponding items (Comar [Col. 02 line 41 – line 55]: “Content to be displayed in an environment such as an electronic marketplace, content will often be selected based upon some measure of relevance. This can include, for example, relevance to a submitted query, relevance to a page to be displayed, or relevance to a user session in which the content is to be displayed. 
When determining which of the relevant items to display, however, the system might look to various performance metrics in order to display content that is most likely to result in a conversion, or an action such as a view, click, add-to-cart, or purchase, among other such actions. This can be particularly useful for displays such as recommendations and deals or special offers. Presenting content that is selected and arranged based on performance metrics can help to maximize the potential revenue to be obtained from the display.”). Regarding claim 5, the system of claim 4, wherein normalizing the user engagement data comprises: computing an OTR, and a CTR based on the corresponding number of item orders, number of add-to-carts, and number of item clicks, respectively (Comar [Col. 2 lines 5 – 20]: “These arrangements are often used to display content such as search results, recommendations, and deals or offers for various items available for consumption. Each instance of a particular piece of content being displayed is referred to as an impression, and the performance of an instance of content can be determined based upon the number of specified actions taken per number of impressions. For example, a common performance metric is a click through rate (CTR), which is generally a percentage of the number of impressions that resulted in a user “clicking” on (or otherwise selecting) an instance of content in order to, for example, obtain additional information. Performance metrics can also include the number of resulting purchases per impression, the number of times an item is added to a virtual shopping cart per impression, and the like.”). Regarding claim 6, the system of claim 1, wherein the user-selected percentile is a 5th percentile (Sarma [Col. 
5 lines 4 – 7]: “In one embodiment, one factor to consider in generating a score may comprise the percentage of times that a given item 136 has been added to a virtual shopping cart 183 when such item 136 is viewed in a network page 139.); Regarding claim 7, the system of claim 5, wherein generating the ranking data comprises: determining a descending order of the second normalized OTRs for the one or more corresponding items for each of the plurality of queries (Comar [Col. 13 lines 2 – 19]: “If bias information is available, such as whether a current user or query is associated with a specific intent, or a specific intent can be determined, then the appropriate trained bias function or model (e.g., a corresponding intent model) can be used to determine 608 one or more normalized performance values for each instance of content, or at least a subset of the content. The performance values to be normalized can depend at least in part upon the rules or policies for the deals, such as to rank or prioritize based on clicks, cart adds, or purchases, among other such options. As mentioned elsewhere herein, performance values can also relate to responses that were deemed to have properly answered questions, actions that were determined to complete specified tasks, interactions, gathering of information, and the like. Once the normalized values are obtained, the content can be ranked 610 by the normalized values.”); and ranking the at least subset of the plurality of items based on the descending order of the second normalized OTRs (Comar [Col. 13 lines 17 – line 36]: “Once the normalized values are obtained, the content can be ranked 610 by the normalized values. This can include, as discussed elsewhere herein, generating an ordered ranking and then selecting at least a subset of highest-ranked content items to be selected for display. The display positions for the selected content can then be determined 612 based at least in part upon the ranking. 
Since certain areas or display positions will be most likely to result in an action for a determined intent, the highest ranked instance of content can be placed in the most likely action position for a specific intent, followed by the next highest ranked instance of content, and so on. For an action intent this can involve placing the highest ranked items in the first few results, while for an explore intent this can involve placing items after the first few results and scattered over the next several results, among other such options as may be determined using the trained models. In some embodiments, the type of arrangement used for the display can also be determined based at least in part upon the normalized performance values and other such information.”). Regarding claim 8, the system of claim 7, wherein generating the ranking data comprises: determining that at least two of the at least subset of the plurality of items have second normalized OTRs with a difference within a threshold; and ranking the at least two of the at least subset of the plurality of items based on their corresponding ATRs (Sarma [Col. 02 line 21 – line 34]: “The electronic commerce system 123 is configured to facilitate selection of items 136 from an item catalog 143. The items 136 in the item catalog 143 may be organized into various item categories 146. The electronic commerce system 123 is configured to facilitate the viewing, selection, and purchase of items 136. From time to time, the electronic commerce system 123 may identify a subset of the items 136 that are presented to a customer for various purposes as will be described. In one embodiment, the item ranking process 129 is executed as a portion of the electronic commerce system 123 in order to facilitate a ranking of such a subset of items 136 as a function of virtual shopping cart activity with respect to such items 136 as will be described.”). 
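Claims 4-8 above describe per-item engagement rates (OTR, ATR, and CTR as orders, add-to-carts, and clicks per examine) and a ranking that falls back to ATR when two normalized OTRs differ by less than a threshold. A minimal sketch of that logic; the names are illustrative, and the claimed pairwise threshold comparison is simplified here to bucketing, neither of which comes from the application:

```python
from dataclasses import dataclass

@dataclass
class ItemStats:
    item_id: str
    examines: int
    clicks: int
    add_to_carts: int
    orders: int

    @property
    def otr(self) -> float:  # order-through rate
        return self.orders / self.examines

    @property
    def atr(self) -> float:  # add-to-cart rate
        return self.add_to_carts / self.examines

    @property
    def ctr(self) -> float:  # click-through rate
        return self.clicks / self.examines

def rank_items(items, threshold: float = 0.01):
    # Primary key: descending OTR bucket (items whose OTRs fall within
    # the same threshold-wide bucket are treated as tied); tie-break on
    # descending ATR, per claim 8.
    return sorted(items, key=lambda it: (-(it.otr // threshold), -it.atr))
```

With a threshold of 0.01, two items with OTRs of 0.105 and 0.103 are tied on OTR and ordered by their ATRs, while an item with OTR 0.25 ranks above both.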
Regarding claim 9, the system of claim 1, wherein determining the number of examines for each of the one or more corresponding items for each of the plurality of queries comprises: determining, based on the user engagement data for each of the one or more corresponding items of each of the plurality of queries, an engaged item appearing last in a search result listing of each of the plurality of queries (Comar [Col. 03 line 40 – line 48]: “In FIGS. 1A and 1B illustrate example displays 100, 150 of content that can be presented in accordance with various embodiments. The example display 100 of FIG. 1A illustrates a set of search results 104 presented for a submitted query 102, in this example the query “fishing.” As known for such displays, the search query can be received to a search field that can cause related items to be located and displayed as a list of search results that are typically ranked by relevance.”); determining any of the one or more corresponding items that appear in the search result listing before the engaged item appearing last; and determining that the engaged item appearing last and any of the one or more corresponding items that appear in the search result listing before the engaged item appearing last are examined (Comar [Col. 09 line 06 – line 22]: “If the request is a request for search results, for example, information for a received query can be directed by components of the interface layer 308 to a search engine 316 that is able to utilize data from an index 318 to determine the appropriate search results. As known for search engines, the search engine 316 may be configured to crawl the content data store 314 or other such data sources in order to index that data, in order to facilitate the rapid and accurate location of search results from the search index 318 in response to a received query. The provided search results in at least some embodiments can then provide links to content stored in the content data store 314. 
In some embodiments lists or selections of content can also be provided for various categorization pages, or other collections of content, where those collections or selections are determined based on relevance or other criteria for a specific category or type of content, among other such options.). Regarding claim 10, Sarma teaches, a method comprising: receiving user session data for a plurality of users (Sarma [Col. 4 lines 8 – 19]: In such case, users may employ their virtual shopping cart 183 not just for purchasing items in a given session, but for ultimately designating items 136 to purchase sometime in the future in other sessions. In this respect, a "session" is defined as a time within which a customer interacts with the network site 126 using the client device 106, or maintains a connection with the network site 126 during a single log on.); generate, based on the user session data (Sarma [Col. 03 line 16 – line 20]: “A customer may proceed to a checkout … virtual shopping cart 183 during a given session.”), user engagement data (Sarma [Col. 2 lines 51 – 52]: Associated with each customer account 159 are customer information 173, a browse history 176, a purchase history 179, a virtual shopping cart 183, and potentially other data.) characterizing engagements of one or more corresponding items (Sarma [Col. 4 lines 8 – 19]: During interaction with the network site 126 facilitated by the electronic commerce system 123, a customer may include various items 136 in their virtual shopping cart 183 in order to purchase such items 136. In some cases, items 136 may be added to or stored in a virtual shopping cart 183, but then not purchased. In such case, users may employ their virtual shopping cart 183 not just for purchasing items in a given session, but for ultimately designating items 136 to purchase sometime in the future in other sessions.) for each of a plurality of queries (Sarma [Col. 
7 lines 53 – 57]: “To this end, the search terms are entered in a search text box 203 in order to execute the search, where such search terms are shown in the search text box 203 in the results network page 139a to inform the customer of the search performed.” Here, search terms are search queries.); determining, based on the user session data, a number of examines for each of the one or more corresponding items for each of the plurality of queries (Sarma [Col. 02 line 35 – line 47]: The electronic commerce system 123 is configured to facilitate selection of items 136 from an item catalog 143. The items 136 in the item catalog 143 may be organized into various item categories 146. The electronic commerce system 123 is configured to facilitate the viewing, selection, and purchase of items 136. From time to time, the electronic commerce system 123 may identify a subset of the items 136 that are presented to a customer for various purposes as will be described. In one embodiment, the item ranking process 129 is executed as a portion of the electronic commerce system 123 in order to facilitate a ranking of such a subset of items 136 as a function of virtual shopping cart activity with respect to such items 136 as will be described.”); Sarma doesn’t clearly teach, normalizing the user engagement data for each of the one or more corresponding items for each of the plurality of queries based on the corresponding number of examines; However, Comar [Col. 04 line 46 – line 59] teaches, “Approaches in accordance with various embodiments can attempt to factor the observed performance (e.g., click) data into user intent and content relevance. This can be performed in at least some embodiments using a matrix factorization approach derived from a probability distribution, such as a Poisson-Beta or Poisson-Gamma generative model. Various approaches can also attempt to smooth different types of bias across queries, as each bias type may not be present in every individual query. 
Further, within intent bias there can be various types of intent that are determined and utilized to improve relevance. These can include, for example, action (e.g., purchase or consume), explore (e.g., navigate or obtain information), ambiguity, task complete, decision making, and the like.” wherein the user engagement data is normalized based on: generating, for each of the one or more corresponding items for each of the plurality of queries (Comar [Col. 8 line 65 – Col. 9 line 5]: In some embodiments, information for the request might also be compared against user data in a user data store 312 or other such location to determine, for example, whether the user has access rights to that content. In some embodiments user data might also be analyzed to determine which type of content to provide, additional supplemental content to provide with the primary content, and the like.), a first normalized order-through rate (OTR) based on examines and orders received for the corresponding item (Comar [Col. 7 line 52]: purchase rate; and Comar [Col. 2 line 13]: click through rate (CTR)), receiving a user-selected percentile, generating a Beta distribution for a random variable representing OTR with two parameters (Comar [Col. 7 lines 37 – 60]: In order to test the performance of the intent-based relevance estimation algorithm on a search dataset, a set of approximately 50,000 queries was utilized under a specific category. The proposed multi-intent model with two intents gives the best result on this data set. The first intent (of two intents) from the proposed multi-intent Poisson-Beta model gave the highest mean reciprocal rank across all the query categories. In particular, the first intent outperforms all other relevance measures when the query length increases. Here, the intents correspond to an action intent and an explore intent as explained elsewhere herein. 
Users having an action intent typically issue a pointed query (in the case of a generic query like fishing, users with a purchase intent tend to refine the search results with additional category filters to narrow the choice of retrieved items) and the purchases typically happen from the first few positions. The purchase rate rapidly drops thereafter. Users with an explore intent tend to move between pages, clicking (exploring) on items until they settle for an item of their liking. A distinguishing aspect of the action intent estimated from the proposed model is that the action intent decreases more rapidly than the position bias estimated by the baseline algorithms like SI-Gamma and SI-Beta, which in turn impacts the estimated item relevance.), wherein one of the two parameters is determined based on a first number of the orders received for the corresponding item (Comar [Col. 12 line 54 – Col. 13 line 33]: The content can include content for items to be recommended to a customer, items corresponding to a search request, or items to be suggested as a set of deals to a number or set of customers, among other such options. … The performance values to be normalized can depend at least in part upon the rules or policies for the deals, such as to rank or prioritize based on clicks, cart adds, or purchases, among other such options. … Once the normalized values are obtained, the content can be ranked 610 by the normalized values. This can include, as discussed elsewhere herein, generating an ordered ranking and then selecting at least a subset of highest-ranked content items to be selected for display. The display positions for the selected content can then be determined 612 based at least in part upon the ranking. 
Since certain areas or display positions will be most likely to result in an action for a determined intent, the highest ranked instance of content can be placed in the most likely action position for a specific intent, followed by the next highest ranked instance of content, and so on. For an action intent this can involve placing the highest ranked items in the first few results, while for an explore intent this can involve placing items after the first few results and scattered over the next several results, among other such options as may be determined using the trained models.), wherein the other of the two parameters is determined based on a second number of the examines received for the corresponding item minus the first number of the orders received for the corresponding item (Comar [Col. 6 lines 20 – 67]: Here, the clicks an item i receives at a position p is an aggregation of clicks arising from the multiple intents, where each intent is a Poisson random variable with the rate P_ipk. Since the sum of Poisson random variables is a Poisson with a rate parameter being the simple sum of all rates, the model can become: … The update equations for r_ik and b_pk are interdependent or cyclic, requiring multiple iterations to converge to an optimal value. At any iteration, if the optimal value for one of the parameters is reached, then the other parameter can be estimated with good accuracy. In particular, starting with a good estimate of (b_pk) results in an accurate estimate for r_ik.), computing, according to the user-selected percentile, a percentile point of the random variable representing OTR in the Beta distribution (Comar [Col. 7 lines 41 – 55]: The first intent (of two intents) from the proposed multi-intent Poisson-Beta model gave the highest mean reciprocal rank across all the query categories. In particular, the first intent outperforms all other relevance measures when the query length increases. 
Here, the intents correspond to an action intent and an explore intent as explained elsewhere herein. Users having an action intent typically issue a pointed query (in the case of a generic query like fishing, users with a purchase intent tend to refine the search results with additional category filters to narrow the choice of retrieved items) and the purchases typically happen from the first few positions. The purchase rate rapidly drops thereafter. Users with an explore intent tend to move between pages, clicking (exploring) on items until they settle for an item of their liking.), and generating a second normalized OTR as a value equal to the percentile point of the random variable representing OTR (Comar [Col. 7 line 66 – Col. 8 line 12]: In the example plot 200 of FIG. 2A, a first intent profile 202 corresponds to an action intent, and a second intent profile 204 corresponds to an explore intent. The query “fishing” corresponds to a generic word which retrieves content for a variety of items from several different categories, including fishing gadgets or equipment, fishing related apparel, fishing related books, and the like. This particular query has a very strong exploratory intent and in practice it has been determined that the users have clicked items placed at several different positions across multiple pages. As illustrated, the action intent starts off with a very high (smoothed) click through rate for the initial positions then falls off rapidly until after the fiftieth position there is very little CTR at any position.); generating ranking data characterizing a ranking of at least a subset of the plurality of items based on the normalized user engagement data (Comar [Col. 4 lines 34 – 45]: In some situations the ranking is based at least in part upon an estimated relevance or quality of the deal with respect to a certain page, query, type of user, or other such factor. 
These values are often determined at least in part by monitoring or tracking the number of actions taken by users with respect to displays of each respective deal (or similar types of deals). In many cases, this can include a click through rate (CTR) or other such metric that identifies, for example, the frequency with which users clicked on content for a displayed deal when that deal was displayed to a user (or at least displayed on a display screen or other such mechanism).); and training a machine learning model based on the ranking data (Comar [Col. 9 lines 26 – 61]: As mentioned, this can include training for various types of bias, such as position and intent bias. In this example the content provider environment 306 will at least include a bias model training component or service that includes intent logic 320 for determining intent and training a bias model using the determined intent data. … For example, some types of content might be ranked based on purchases, while other types might be ranked based on views or clicks, or combinations thereof. … In some embodiments, the intent data and training of the bias model might be performed by an external intent service 324 or system, which can have access to at least the performance data in the performance data store 322 in order to train the appropriate bias model and provide the bias-adjusted relevance values for use by the content manager 310 and/or search engine 316 in selecting and/or ranking content.). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to incorporate the teaching of Comar into Sarma’s system by adding the feature of user engagement data / user intent. The references (Sarma and Comar) teach features that are analogous art and they are directed to the same field of endeavor, such as e-commerce. An ordinarily skilled artisan would have been motivated to do so to provide Sarma’s system with enhanced user data. 
(See Comar [Abstract], [Col. 04 line 46 – line 59], [Col. 4 lines 34 – 45], [Col. 9 lines 26 – 61]). One of the biggest advantages of machine learning algorithms is their ability to improve over time. Machine learning technology typically improves efficiency and accuracy thanks to the ever-increasing amounts of data that are processed. Comar does not clearly teach, wherein the first normalized OTR is equal to an expectation of the random variable representing OTR in the Beta distribution; However, Ghavamzadeh [0094] teaches, “… random variable whose expectation is denoted by c(x, a)=E[C(x, a)]; P(·|x, a) is the transition probability distribution;” Furthermore, Ghavamzadeh [0190] teaches, “The method 800 includes an act 802 of receiving a risk-tolerance value β. In particular, act 802 can involve receiving a threshold tolerance value for an ad serving campaign. For example, act 802 can involve receiving a risk threshold that a marketer is willing to allow the ad serving campaign to perform. The risk-tolerance value β can comprise a variety of forms, including in a business outcome (such as revenue or sales), consumer behavior (such as clicks, selections, or purchases), a threshold click-thru rate, or a statistical measure (such as CVaR, standard deviations, value-at-risk, etc.). Act 802 can optionally involve receiving or identifying a confidence level α for the risk-tolerance value β. For example, act 802 can involve identifying a default value confidence level α, such as for example 95%. Alternatively, act 802 can involve receiving input from a marketer that adjusts the default confidence level α or otherwise provides the confidence level α.” It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to incorporate the teaching of Ghavamzadeh into the system of Sarma and Comar by adding the feature of random variable data. 
The references (Sarma, Comar and Ghavamzadeh) teach features that are analogous art and they are directed to the same field of endeavor, such as e-commerce. An ordinarily skilled artisan would have been motivated to do so to provide Sarma and Comar’s system with enhanced user data. (See Ghavamzadeh [Abstract], [0094], [0190]). One of the biggest advantages of machine learning algorithms is their ability to improve over time. Machine learning technology typically improves efficiency and accuracy thanks to the ever-increasing amounts of data that are processed. Regarding claim 11, the method of claim 10, comprising: generating training features based on the user engagement data for each of the one or more corresponding items of each of the plurality of queries; and generating training labels based on the ranking data; wherein the machine learning model is trained based on the training features and the training labels (Comar [Col. 9 lines 26 – 61]: As mentioned, this can include training for various types of bias, such as position and intent bias. In this example the content provider environment 306 will at least include a bias model training component or service that includes intent logic 320 for determining intent and training a bias model using the determined intent data. … For example, some types of content might be ranked based on purchases, while other types might be ranked based on views or clicks, or combinations thereof. … In some embodiments, the intent data and training of the bias model might be performed by an external intent service 324 or system, which can have access to at least the performance data in the performance data store 322 in order to train the appropriate bias model and provide the bias-adjusted relevance values for use by the content manager 310 and/or search engine 316 in selecting and/or ranking content.). 
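For readers parsing the claim-10 limitations at issue, the recited normalization can be sketched directly from the claim language: the Beta distribution's two parameters are the orders received and the examines minus orders, the first normalized OTR is the Beta expectation, and the second is a user-selected percentile point (a 5th percentile per claim 14). The sketch below is illustrative only; the function name is hypothetical, and the percentile point is estimated by stdlib Monte Carlo sampling rather than an exact quantile (which would use, e.g., an inverse regularized incomplete beta function).

```python
import random

def normalized_otrs(orders, examines, percentile=5, n_samples=20000, seed=0):
    """Illustrative sketch of the claim-10 normalization.

    One Beta parameter is the number of orders; the other is the number
    of examines minus orders.  The first normalized OTR is the Beta
    expectation; the second is a user-selected percentile point.
    """
    alpha = orders                 # first parameter: orders received
    beta = examines - orders       # second parameter: examines minus orders
    # First normalized OTR: E[Beta(alpha, beta)] = alpha / (alpha + beta),
    # which equals the raw orders-per-examine rate.
    first_otr = alpha / (alpha + beta)
    # Second normalized OTR: percentile point of the same distribution,
    # estimated here by sorting Monte Carlo draws.  A low percentile is a
    # conservative lower bound that penalizes items with little traffic.
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(alpha, beta) for _ in range(n_samples))
    index = min(n_samples - 1, max(0, int(percentile / 100 * n_samples) - 1))
    second_otr = samples[index]
    return first_otr, second_otr

# Two items with the same 10% raw OTR but very different traffic: the
# 5th-percentile OTR separates them, ranking the well-observed item higher.
low_traffic = normalized_otrs(orders=2, examines=20)
high_traffic = normalized_otrs(orders=200, examines=2000)
```

This is why the claimed second normalized OTR differs from a plain rate: with identical expectations, the low-traffic item's wider Beta distribution pushes its lower percentile point down, so ranking by the percentile point favors items whose engagement rate is well supported by data.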
Regarding claim 12, the method of claim 10, wherein determining the number of examines comprises determining a number of item clicks, a number of add-to-carts, and a number of item orders for each of the one or more corresponding items (Comar [Col. 02 line 41 – line 55]: “Content to be displayed in an environment such as an electronic marketplace will often be selected based upon some measure of relevance. This can include, for example, relevance to a submitted query, relevance to a page to be displayed, or relevance to a user session in which the content is to be displayed. When determining which of the relevant items to display, however, the system might look to various performance metrics in order to display content that is most likely to result in a conversion, or an action such as a view, click, add-to-cart, or purchase, among other such actions. This can be particularly useful for displays such as recommendations and deals or special offers. Presenting content that is selected and arranged based on performance metrics can help to maximize the potential revenue to be obtained from the display.”). Regarding claim 13, the method of claim 12, wherein normalizing the user engagement data comprises: computing an OTR, an ATR, and a CTR based on the corresponding number of item orders, number of add-to-carts, and number of item clicks, respectively (Comar [Col. 2 lines 5 – 20]: “These arrangements are often used to display content such as search results, recommendations, and deals or offers for various items available for consumption. Each instance of a particular piece of content being displayed is referred to as an impression, and the performance of an instance of content can be determined based upon the number of specified actions taken per number of impressions. 
For example, a common performance metric is a click through rate (CTR), which is generally a percentage of the number of impressions that resulted in a user “clicking” on (or otherwise selecting) an instance of content in order to, for example, obtain additional information. Performance metrics can also include the number of resulting purchases per impression, the number of times an item is added to a virtual shopping cart per impression, and the like.”). Regarding claim 14, the method of claim 10, wherein the user-selected percentile is a 5th percentile (Sarma [Col. 5 lines 4 – 7]: “In one embodiment, one factor to consider in generating a score may comprise the percentage of times that a given item 136 has been added to a virtual shopping cart 183 when such item 136 is viewed in a network page 139.”). Regarding claim 15, the method of claim 10, wherein determining the number of examines for each of the one or more corresponding items for each of the plurality of queries comprises: determining, based on the user engagement data for each of the one or more corresponding items of each of the plurality of queries, an engaged item appearing last in a search result listing of each of the plurality of queries (Comar [Col. 03 line 40 – line 48]: “FIGS. 1A and 1B illustrate example displays 100, 150 of content that can be presented in accordance with various embodiments. The example display 100 of FIG. 
1A illustrates a set of search results 104 presented for a submitted query 102, in this example the query “fishing.” As known for such displays, the search query can be received to a search field that can cause related items to be located and displayed as a list of search results that are typically ranked by relevance.”); determining any of the one or more corresponding items that appear in the search result listing before the engaged item appearing last; and determining that the engaged item appearing last and any of the one or more corresponding items that appear in the search result listing before the engaged item appearing last are examined (Comar [Col. 09 line 06 – line 22]: “If the request is a request for search results, for example, information for a received query can be directed by components of the interface layer 308 to a search engine 316 that is able to utilize data from an index 318 to determine the appropriate search results. As known for search engines, the search engine 316 may be configured to crawl the content data store 314 or other such data sources in order to index that data, in order to facilitate the rapid and accurate location of search results from the search index 318 in response to a received query. The provided search results in at least some embodiments can then provide links to
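The examine-counting rule the rejection maps to claim 15, in which the engaged item appearing last in a search result listing and every item shown before it are treated as examined, can be sketched as follows. The function name and the toy item listing are hypothetical, used only to make the rule concrete.

```python
def examined_items(result_listing, engaged_items):
    """Sketch of the claim-15 examine rule: the engaged item appearing
    last in the search result listing, plus every item appearing before
    it, counts as examined; items after it do not.

    result_listing: items in ranked display order.
    engaged_items:  items the user clicked, added to cart, or ordered.
    """
    last_engaged = max(
        (pos for pos, item in enumerate(result_listing) if item in engaged_items),
        default=-1,  # no engagement at all: nothing counts as examined
    )
    return result_listing[: last_engaged + 1]

# "D" is the last engaged item, so A through D count as examined and E does not.
examined = examined_items(["A", "B", "C", "D", "E"], {"B", "D"})
```

Under this rule, the per-item examine counts feed the denominator of the Beta-based normalization recited earlier in claim 10, which is why the claim ties the examine determination to the position of the last engagement rather than to raw impressions.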

Prosecution Timeline

Apr 30, 2021
Application Filed
Nov 14, 2022
Non-Final Rejection — §103
Feb 03, 2023
Response Filed
May 18, 2023
Final Rejection — §103
Jul 13, 2023
Interview Requested
Jul 21, 2023
Examiner Interview Summary
Jul 21, 2023
Applicant Interview (Telephonic)
Jul 25, 2023
Request for Continued Examination
Jul 30, 2023
Response after Non-Final Action
Oct 21, 2023
Non-Final Rejection — §103
Jan 30, 2024
Interview Requested
Feb 05, 2024
Applicant Interview (Telephonic)
Feb 05, 2024
Response Filed
Feb 05, 2024
Examiner Interview Summary
Jun 05, 2024
Final Rejection — §103
Aug 29, 2024
Interview Requested
Sep 05, 2024
Examiner Interview Summary
Sep 05, 2024
Applicant Interview (Telephonic)
Sep 05, 2024
Request for Continued Examination
Sep 11, 2024
Response after Non-Final Action
Sep 14, 2024
Non-Final Rejection — §103
Nov 18, 2024
Interview Requested
Dec 12, 2024
Applicant Interview (Telephonic)
Dec 14, 2024
Examiner Interview Summary
Dec 20, 2024
Response Filed
Aug 28, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477089
DYNAMIC ADAPTATION OF IMAGES FOR PROJECTION, AND/OR OF PROJECTION PARAMETERS, BASED ON USER(S) IN ENVIRONMENT
2y 5m to grant Granted Nov 18, 2025
Patent 12235820
MANAGING MULTI-TENANT KEYS IN MULTI-TENANT COMPUTING ENVIRONMENTS
2y 5m to grant Granted Feb 25, 2025
Patent 12229015
Computerized Methods and Apparatus for Data Cloning
2y 5m to grant Granted Feb 18, 2025
Patent 12222904
AVOIDING DATA INCONSISTENCY IN A FILE SYSTEM USING 2-LEVEL SYNCHRONIZATION
2y 5m to grant Granted Feb 11, 2025
Patent 12061306
Constructing Structural Models of the Subsurface
2y 5m to grant Granted Aug 13, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
40%
Grant Probability
59%
With Interview (+19.4%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 200 resolved cases by this examiner. Grant probability derived from career allow rate.
