Prosecution Insights
Last updated: April 19, 2026
Application No. 18/616,620

Matching Images of Current Inventory with Machine Learning Predictions of User Preferences to Customize User Interface

Final Rejection (§101)

Filed: Mar 26, 2024
Examiner: ASHRAF, WASEEM
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Maplebear Inc.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 5m
With Interview: 59%

Examiner Intelligence

Career Allow Rate: 50% (130 granted / 260 resolved; -2.0% vs TC avg)
Interview Lift: +9.3% (moderate)
Typical Timeline: 4y 5m average prosecution; 9 applications currently pending
Career History: 269 total applications across all art units

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§103: 45.4% (+5.4% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Based on career data from 260 resolved cases; Tech Center averages are estimates.
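The "vs TC avg" deltas let one back out the Tech Center average the dashboard is using. A quick check (illustrative only; it assumes the deltas are simple differences in percentage points):

```python
# Back out the implied Tech Center average from each statute's
# allow rate and its "vs TC avg" delta (both in percentage points).
rates = {"§101": (13.6, -26.4), "§103": (45.4, +5.4),
         "§102": (20.1, -19.9), "§112": (15.1, -24.9)}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average estimate
```

Every row resolves to the same 40.0%, which suggests the dashboard compares each statute against a single flat Tech Center estimate rather than per-statute averages.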

Office Action

§101

DETAILED ACTION

This office action is responsive to applicant's response filed on 10/20/2025. Claims 1, 11, and 20 are amended; claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 20 is taken as representative.

Step 1: The claim recites a system and is therefore a machine.

Step 2A, Prong One: The invention as claimed comprises:

A computer system comprising: a processor; and a non-transitory computer-readable storage medium storing instructions that, when executed by the processor, perform actions comprising:
retrieving a set of user data for a user of an online system;
accessing a machine-learning model trained to predict a measure of preference of the user associated with an item category, wherein the machine-learning model is trained by: receiving user data for a plurality of users of the online system, receiving, for each user of the plurality of users, a label describing the measure of preference of a corresponding user associated with the item category, and training the machine-learning model based at least in part on the user data and the label for each user of the plurality of users;
applying the machine-learning model to predict the measure of preference of the user associated with the item category based at least in part on the set of user data for the user;
for an item included in the item category, receiving information describing an inventory of the item at a retailer location, the information comprising a set of images of the item captured at the retailer location;
receiving a request from a user client device associated with the user to access a user interface comprising information describing a set of items included among the inventory at the retailer location;
determining a measure of similarity between the information describing the inventory of the item at the retailer location and the predicted measure of preference of the user associated with the item category;
generating an inventory matching score for the item based at least in part on the measure of similarity, wherein the inventory matching score indicates whether the inventory of the item at the retailer location is consistent with the predicted measure of preference of the user associated with the item category;
selecting the set of items included among the inventory at the retailer location to include in the user interface based at least in part on the inventory matching score;
generating the user interface comprising a set of information describing the selected set of items, wherein generating the user interface further comprises selecting one or more images corresponding to the selected set of items from a plurality of candidate images based on the predicted measure of preference of the user; and
sending the user interface to the user client device associated with the user, wherein sending the user interface causes the user client device to display the user interface.

The claim as drafted is a process that, under its broadest reasonable interpretation, covers certain methods of organizing human activity such as marketing or sales activities. That is, other than reciting a computer, a processor, training a machine-learning model, a retailer location, a client device, and an interface, nothing in the claim precludes the steps from practically being a sales activity.
It is no different than checking which location has the most inventory matching a buyer's preference.

Step 2A, Prong Two: The claim recites the additional elements of "computer, processor, training machine learning model, retailer location, client device, and interface." These additional elements are no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Using the trained model is nothing more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). In addition, a machine-learning model can be interpreted as a mathematical concept. The process of generating and presenting the interface merely applies internet-browser technology to render the recommended content on the user's device for display.

Step 2B: As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component or generally linking the use of the judicial exception to a particular technological environment. The same conclusion is reached at Step 2B: mere instructions to apply an exception on a generic computer, or generally linking the use of the judicial exception to a particular technological environment, cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claims 1 and 11 are rejected on the same rationale as claim 20 above, as these claims recite the corresponding method and computer program product of the system of claim 20.

Claims 2-10 and 12-19 further narrow the recited abstract idea and are rejected under the same rationale. Claim 5 recites the additional element of a memory, which is treated in the same manner as the device/processor above. In addition, the "picker client device" recited in claim 10 is treated the same as the client device of claim 20.

Allowable Subject Matter

Claims 1-20 are allowed over the prior art. Liu et al. (US 20230244727 A1) teaches predicting a user's preference for a product or product category (brand) using machine learning and ranking: "[0054] The machine learning architecture 350 can comprise one or more learning models that are configured to optimize the ranking of items 310 included in the search results 380. In certain embodiments, the one or more learning models can be trained to personalize the ranking of items 310 for each user based, at least in part, on predicted user preferences. In many cases, the one or more learning models can be trained to personalize the search results 380 on an individual user basis (e.g., specifically for each user)."

Garner (US 20200273013 A1) teaches inventory levels of products of interest at different retail locations: "[0084] ….. Further, one or more rules may be applied that access product preference information for a particular customer, identify products that correspond to the product preference information, and include those products into the listing of products 634. ….. Additionally or alternatively, the applied set of rules result in filtering the products identified in the product database based on one or more of a location of one or more retail stores, rates of sales of one or more of the products at one or more retail stores, inventory levels and/or on-hand inventory of one or more of the products at one or more of the retail stores, other such factors, or a combination of two or more of such factors.
For example, one or more rules may cause a confirmation that a particular store has a predefined threshold on-hand quantity, an on-hand quantity based on a predicted quantity a customer is expected to purchase, or other such on-hand quantity prior to incorporating the product into the resulting listing of products. Still further, some embodiments may apply one or more rules that evaluate an on-hand quantity relative to a current and/or predicted rate of sale, and determine whether a threshold quantity will be available at a store at one or more times in the future prior to including the product into the listing. Other rules may apply factors such as a store that a customer is visiting and/or expected to visit, on-hand inventory of products at that store, featured and/or on-sale products at that store, frequency of sales of products at that store, product demand at that store, and/or other such factors. For example, a product may be excluded from a listing of products 634 when an on-hand quantity is less than a threshold."

As to independent claims 1, 11, and 20, the closest prior art of record, taken either individually or in combination with other prior art of record, fails to teach or suggest the specific combination of claim limitations presently recited. While each of the individual features may have been known per se, there is no teaching or suggestion, absent applicants' own disclosure, to combine these features in the specific manner claimed other than with impermissible hindsight.

Response to Arguments

Applicant's arguments filed on 10/20/2025 have been fully considered but are not persuasive.

Applicant argues (p. 14): "Under a prong 1 analysis, similar to Example 37, the claimed step of generating the user interface now requires action by a processor that cannot be practically applied in the mind. In particular, this claim element bases image selection on the predicted measure of preference of the user, which was determined based on historical application usage by the machine learning model to ensure the correct image is surfaced. See e.g., [0002]-[0003] and [0058]."

The amended language ("wherein generating the user interface further comprises selecting one or more images corresponding to the selected set of items from a plurality of candidate images based on the predicted measure of preference of the user") generates the user interface based on the image selected by the selection module. Note that selecting an image corresponding to the selected set of items based on a predicted measure of preference is itself an abstract idea, as the preference-based selection can fall within the mental-process category and the mathematical category (selecting via machine-learning prediction). As stated in the specification, the selection is made by the selection module. The use of a selection module to make the selection is merely applying the computer as a tool to make a selection that could have been done in the mind or with pen and paper. The instant specification, para. 0058, recites: "[0058] The interface module 211 may generate a user interface (e.g., the ordering interface) that includes information describing items included among an inventory at a retailer location. The interface module 211 may do so in response to receiving a request from a user client device 100 associated with a user to access the user interface.
The user interface may include a set of information associated with each item selected by the selection module 214…"

The facts of Example 37 are very different from those of the instant claim. In Example 37, the additional limitations are: "The claim recites the combination of additional elements of 1) receiving, via the GUI, a user selection to organize each icon based on a specific criteria, wherein the specific criteria is an amount of use of each icon; 2) using a processor to perform the determining step; and 3) automatically moving the most used icons to a position on the GUI closest to the start icon of the computer system based on the determined amount of use. The additional elements recite a specific manner of automatically displaying icons to the user based on usage which provides a specific improvement over prior systems, resulting in an improved user interface for electronic devices."

In other words, the display has icons, the amount of usage of those icons is calculated, and the icons are then rearranged on the display based on that usage information; this is quite different from merely displaying selected information on the GUI. In the instant claim, calculations are done merely to make a selection based on user preference, and the selection is then displayed on the GUI; the claim has nothing to do with calculating the amount of usage of displayed icons and rearranging the icons accordingly.

Applicant argues (pp. 14-15): "Under a Prong 2 analysis, the claim element at issue causes the claim as a whole to be integrated into a practical application. Specifically, the additional elements recite a specific manner of displaying images in a user interface to the user based on usage which provides a specific improvement over prior systems of aligning images for a given item to a user's preferences, resulting in an improved user interface for electronic devices."

As explained above, applicant is misconstruing the facts of Example 37; calculating a user preference and displaying an image based on that preference is merely displaying the image on a GUI, and the selection of an item's image based on a user's preference can be done in the mind. For example, one can look at a set of images of items, predict that one's child likes an image of Pikachu more than another Pokémon, and select the image of Pikachu for display. As written, the claim language is very broad; it does not require performing a calculation on what is being displayed and, based on that measure, rearranging the icons. Even if one does not treat the amended claim language as part of the abstract idea but instead as an additional limitation, displaying content based on user preference is well known and conventional. For example, any e-commerce site, or a movie platform such as Netflix, utilizes predicted user preferences to display items; e-commerce sites such as Amazon can even suggest and display different items based on a user's tracked history and preferences. Thus, even if treated as an additional limitation, the element falls under well-understood, routine, and conventional activity, as the above-cited examples were well known long before the filing date of 03/06/2024.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Waseem Ashraf, whose telephone number is (571) 270-3948. The examiner can normally be reached Monday-Wednesday 7:00 A.M. to 7:00 P.M. EST, and Thursday 7:00 A.M. to 11:00 A.M. EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tariq Hafiz, can be reached at (571) 272-5350. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WASEEM ASHRAF/
Supervisory Patent Examiner, Art Unit 3621
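For readers parsing the claim language, the system recited in claim 20 amounts to a scoring pipeline: predict a per-category preference, compare it against live inventory information, score each item, and build the interface from the top-scoring items. A minimal sketch of that flow (every name, data shape, and the similarity metric here is a hypothetical chosen for illustration; this is not the applicant's implementation):

```python
# Hypothetical sketch of the pipeline recited in claim 20.
# All names, data shapes, and the similarity metric are assumptions
# made for illustration; none come from the application itself.
from dataclasses import dataclass

@dataclass
class InventoryItem:
    name: str
    category: str
    feature: float  # e.g., a score derived from images captured in-store

def predict_preference(user_data: dict, category: str) -> float:
    """Stand-in for the trained ML model's per-category preference prediction."""
    return user_data.get(category, 0.0)

def matching_score(item: InventoryItem, preference: float) -> float:
    """Similarity between inventory information and the predicted preference
    (here, simple closeness on a 0-1 scale)."""
    return 1.0 - abs(item.feature - preference)

def select_items(inventory, user_data, category, top_n=2):
    """Rank the category's inventory by matching score and keep the top N."""
    preference = predict_preference(user_data, category)
    scored = [(matching_score(item, preference), item)
              for item in inventory if item.category == category]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_n]]

inventory = [
    InventoryItem("ripe avocado", "produce", 0.9),
    InventoryItem("underripe avocado", "produce", 0.3),
    InventoryItem("organic kale", "produce", 0.7),
]
user = {"produce": 0.85}
picks = select_items(inventory, user, "produce")
print([item.name for item in picks])  # highest-scoring items first
```

Under the examiner's Step 2A analysis, each of these steps is characterized as something that could be done mentally or on paper, which is why the dispute centers on whether the image-selection limitation adds a technical improvement rather than on what the pipeline computes.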

Prosecution Timeline

Mar 26, 2024: Application Filed
Sep 10, 2025: Non-Final Rejection — §101
Oct 20, 2025: Applicant Interview (Telephonic)
Oct 20, 2025: Response Filed
Oct 21, 2025: Examiner Interview Summary
Jan 24, 2026: Final Rejection — §101
Apr 16, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524800: SPATIALLY AUGMENTED AUDIO AND XR CONTENT WITHIN AN E-COMMERCE SHOPPING EXPERIENCE (granted Jan 13, 2026; 2y 5m to grant)
Patent 12190349: SELECTING ADDITIONAL CONTENT FOR INCLUSION IN VIDEO DATA PRESENTED TO USERS VIA AN ONLINE SYSTEM (granted Jan 07, 2025; 2y 5m to grant)
Patent 11972458: DELIVERING TARGETED ADVERTISING TO MOBILE DEVICES (granted Apr 30, 2024; 2y 5m to grant)
Patent 11922447: BIOMETRIC-BASED PAYMENT REWARDS (granted Mar 05, 2024; 2y 5m to grant)
Patent 9648493: title unavailable (granted May 09, 2017; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50% (59% with interview, +9.3%)
Median Time to Grant: 4y 5m
PTA Risk: Moderate

Based on 260 resolved cases by this examiner. Grant probability derived from career allow rate.
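The headline numbers follow from simple arithmetic on the examiner's career data. A minimal sketch reproducing them (illustrative only; the function names are assumptions, and the +9.3% interview lift is applied as additive percentage points, which matches the displayed figures):

```python
# Illustrative reconstruction of the dashboard's headline statistics.
# Function names are assumptions; the interview lift is treated as
# additive percentage points, which reproduces the displayed values.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted / resolved."""
    return granted / resolved

def with_interview(base_rate: float, lift_points: float) -> float:
    """Apply an additive interview lift (in percentage points), capped at 100%."""
    return min(base_rate + lift_points / 100, 1.0)

base = allow_rate(130, 260)          # 0.50 -> the 50% grant probability
boosted = with_interview(base, 9.3)  # 0.593 -> displayed as 59%

print(f"Career allow rate: {base:.0%}")
print(f"With interview:    {boosted:.0%}")
```

Note that treating the lift as additive is an assumption; a relative lift (50% × 1.093 ≈ 54.7%) would not reproduce the 59% shown, so the additive reading appears to be what the dashboard uses.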
