Prosecution Insights
Last updated: April 19, 2026
Application No. 19/208,438

CONTEXTUAL BANDIT MODEL FOR QUERY PROCESSING MODEL SELECTION

Non-Final OA: §102, §DP
Filed: May 14, 2025
Examiner: HARPER, ELIYAH STONE
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Maplebear Inc.
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 73% (559 granted / 764 resolved; +18.2% vs TC avg), above average
Interview Lift: +11.6% (moderate, roughly +12%), measured on resolved cases with interview
Typical Timeline: 4y 2m average prosecution; 17 currently pending
Career History: 781 total applications across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 2.7% (-37.3% vs TC avg)

Deltas are vs. the Tech Center average estimate • Based on career data from 764 resolved cases

Office Action

Grounds: §102, §DP
DETAILED ACTION

1. This Office action is in response to application 19/208,438 filed on 5/14/2025. Claims 1-20 are pending in this Office action.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

3. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4 and 6-10 of U.S. Patent No. 12,339,915 in view of US 11,004,135 (hereinafter Sandler). Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations in bold are the same and the differences would have been obvious to an artisan of ordinary skill in the art. For instance, US 12,339,915 recites a predicted reward whereas the instant application recites a predicted likelihood.
Sandler, however, does disclose a predicted likelihood (see column 8, line 45 to column 9, line 15; the system selects from multiple models using learning or a bandit model based on the likelihood of selection of the items of interest). It would have been obvious to an artisan of ordinary skill in the pertinent art at the time the instantly claimed invention was filed to have incorporated the teaching of Sandler into the system of US 12,339,915. The modification would have been obvious because the two references are concerned with the solution to the problem of predicting user behavior; therefore there is an implicit motivation to combine these references (i.e., motivation from the references themselves). In other words, the ordinarily skilled artisan, during his/her quest for a solution to the cited problem, would look to the cited references at the time the invention was made. Consequently, the ordinarily skilled artisan would have been motivated to combine the cited references since Sandler's teaching would enable users of the US 12,339,915 system to have more efficient processing.

As for claim 10, Sandler discloses: wherein obtaining the one or more contextual features describing the context of the user query comprises: accessing user features from a user profile associated with a user associated with the client device (see column 6, lines 20-55 and column 7, lines 50-65; user profiles are associated with historical, behavioral, or current context and used as input). It would have been obvious to an artisan of ordinary skill in the pertinent art at the time the instantly claimed invention was filed to have incorporated the teaching of Sandler into the system of US 12,339,915, for the same implicit motivation stated above (the two references are concerned with the solution to the same problem of predicting user behavior).

As for claim 11, Sandler discloses: wherein applying the contextual bandit model to the query features and the contextual features comprises applying the contextual bandit model to select two or more query processing models, the method further comprising: applying each selected query processing model to the user query and the contextual features to identify a set of query results (see column 8, line 45 to column 9, line 15; the system selects from multiple models using learning or a bandit model); and displaying an aggregation of query results from the sets of query results output by the two or more query processing models (see column 9, lines 4-15; an aggregate is displayed). It would have been obvious to an artisan of ordinary skill in the pertinent art at the time the instantly claimed invention was filed to have incorporated the teaching of Sandler into the system of US 12,339,915, again for the same implicit motivation: Sandler's teaching would enable users of the US 12,339,915 system to have more efficient processing.
Claim chart: application 19/208,438 vs. U.S. Patent No. 12,339,915

Instant claim 1: A computer-implemented method comprising: receiving, from a client device, a user query for identifying one or more items by an online system, the user query described by one or more query features; obtaining one or more contextual features describing a context of the user query; applying a contextual bandit model to the query features and the contextual features to select a query processing model from a plurality of query processing models, wherein applying the contextual bandit model comprises: outputting, for each query processing model, a predicted likelihood that the user will interact with the query results identified by the query processing model; and selecting the query processing model from the plurality of query processing models based on the predicted likelihoods; applying the selected query processing model to the user query to obtain query results; and transmitting the query results for display on the client device.

Patent claim 1: A computer-implemented method comprising: receiving, from a client device, a user query for identifying one or more items by an online system, the user query described by one or more query features; obtaining one or more contextual features describing a context of the user query, wherein the one or more contextual features comprises: user features describing a user associated with the client device; retailer features describing one or more retailers hosted by the online system; and item features describing one or more items listed on the online system; and applying a contextual bandit model to the query features and the contextual features to select a query processing model from a plurality of query processing models, wherein applying the contextual bandit model further comprises: outputting, for each query processing model, a predicted reward to the online system for query results identified by the query processing model; and selecting the query processing model from the plurality of query processing models based on the predicted rewards; and applying the selected query processing model to the user query to obtain query results; and transmitting the query results for display on the client device.

Instant claim 2: The computer-implemented method of claim 1, wherein receiving the user query comprises receiving: text, audio signals, or visual signals.

Patent claim 2: The computer-implemented method of claim 1, wherein receiving the user query comprises receiving one or more of: text, audio signals, or visual signals.

Instant claim 3: The computer-implemented method of claim 2, further comprising: extracting the one or more query features from the user query with: a natural language processing model, a speech recognition model, or an image recognition model.

Patent claim 3: The computer-implemented method of claim 2, further comprising: extracting the one or more query features from the user query with one or more of: a natural language processing model, a speech recognition model, or an image recognition model.

Instant claim 4: The computer-implemented method of claim 1, wherein the query processing models are disparately trained.

Patent claim 4: The computer-implemented method of claim 1, wherein the query processing models are disparately trained.

Instant claim 5: The computer-implemented method of claim 1, wherein applying the selected query processing model comprises: applying the selected query processing model to the user query and the contextual features to obtain the query results.

Patent claim 6: The computer-implemented method of claim 1, wherein applying the selected query processing model comprises: applying the selected query processing model to the user query and the contextual features to obtain the query results.

Instant claim 6: The computer-implemented method of claim 1, further comprising: ranking the query results based on relevance to the user query, wherein displaying the query results is based on the ranking.

Patent claim 7: The computer-implemented method of claim 1, further comprising: ranking the query results based on relevance to the user query, wherein displaying the query results is based on the ranking.

Instant claim 7: The computer-implemented method of claim 1, further comprising: receiving, from the client device, a user selection interacting with an item from the query results; scoring the query processing model based on the user interaction; and retraining the contextual bandit model based on the score for the query processing model.

Patent claim 8: The computer-implemented method of claim 1, further comprising: receiving, from the client device, a user selection interacting with an item from the query results.

Instant claim 8: The computer-implemented method of claim 7, wherein the user selection comprises at least one of: viewing an item from the query results; adding the item to a shopping cart; favoriting the item; or ordering the item.

Patent claim 9: The computer-implemented method of claim 8, wherein the user selection comprises at least one of: viewing an item from the query results; adding the item to a shopping cart; favoriting the item; or ordering the item.

Instant claim 9: The computer-implemented method of claim 7, further comprising: scoring a reward based on the user selection, wherein retraining the contextual bandit model comprises retraining further based on the reward.

Patent claim 10: The computer-implemented method of claim 8, further comprising: scoring a reward based on the user selection; and training the contextual bandit model based on the reward.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 11,004,135 (Sandler).

As for claim 1, Sandler discloses: A computer-implemented method comprising: receiving, from a client device, a user query for identifying one or more items by an online system, the user query described by one or more query features (see figure 4, "enter search"; this query is received and then recommendations are made based on context); obtaining one or more contextual features describing a context of the user query (see column 6, lines 30-45; the context of the model selection is based on the context of user query history); applying a contextual bandit model to the query features and the contextual features to select a query processing model from a plurality of query processing models (see column 8, line 45 to column 9, line 15; the system selects from multiple models using learning or a bandit model); wherein applying the contextual bandit model comprises: outputting, for each query processing model, a predicted likelihood that the user will interact with the query results identified by the query processing model (see column 5, line 60 to column 6, line 18; the system predicts items of interest that the user should interact with based on data and places those items in various outputs); and selecting the query processing model from the plurality of query processing models based on the predicted likelihoods (see column 8, line 45 to column 9, line 15; the system selects from multiple models using learning or a bandit model based on the likelihood of selection of the items of interest); applying the selected query processing model to the user query to obtain query results (see column 10, line 40 to column 11, line 5; the results are the recommendations the system determines based on the user initially looking for items; see also figure 4 with the search prompt); and transmitting the query results for display on the client device (see column 9, lines 15-40; while not illustrated, when a user enters a query a trigger tells the system to present the trained recommendations within a user-specific generated interface).

As for claim 2, the rejection of claim 1 is incorporated and Sandler further discloses: wherein receiving the user query comprises receiving: text, audio signals, or visual signals (see figure 4; the user inputs a query via text).

As for claim 3, the rejection of claim 2 is incorporated and Sandler further discloses: extracting the one or more query features from the user query with: a natural language processing model, a speech recognition model, or an image recognition model (see column 7, lines 40-51; the system discloses using natural language, but note the system can also process images and audio waveforms).

As for claim 4, the rejection of claim 1 is incorporated and Sandler further discloses: wherein the query processing models are disparately trained (see column 6, line 55 to column 7, line 20; the models can be separately trained).

As for claim 5, the rejection of claim 1 is incorporated and Sandler further discloses: wherein applying the selected query processing model comprises: applying the selected query processing model to the user query and the contextual features to obtain the query results (see column 10, line 40 to column 11, line 5; the results are the recommendations the system determines based on the user initially looking for items).
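Claim 1, as mapped above, recites a selection loop: score each candidate query processing model with a predicted likelihood that the user will interact with its results, select one, then apply the selected model to the query. A minimal sketch of that loop, assuming a simple linear-plus-sigmoid scorer with optional epsilon-greedy exploration (model names, weights, and handlers below are illustrative assumptions, not from the application or Sandler):

```python
import math
import random

def predicted_likelihood(weights, features):
    # Linear score over query + contextual features, squashed to (0, 1).
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def select_query_processing_model(models, features, epsilon=0.1, rng=random):
    # models: {name: (weights, handler)}. Epsilon-greedy exploration keeps the
    # bandit gathering feedback on arms other than the current best.
    scores = {name: predicted_likelihood(w, features)
              for name, (w, _) in models.items()}
    if rng.random() < epsilon:
        return rng.choice(list(models)), scores
    return max(scores, key=scores.get), scores

def handle_query(user_query, features, models):
    # Claim 1 flow: select from the plurality of models, apply the selected
    # model to the user query, and return its results for display.
    name, _scores = select_query_processing_model(models, features, epsilon=0.0)
    _weights, handler = models[name]
    return name, handler(user_query)
```

With two toy models whose weights favor different features, a feature vector like `[1.0, 0.0]` routes the query to the model weighted on the first feature; setting `epsilon` above zero occasionally routes it elsewhere to collect feedback.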
As for claim 6, the rejection of claim 1 is incorporated and Sandler further discloses: ranking the query results based on relevance to the user query, wherein displaying the query results is based on the ranking (see column 8, lines 30-40; the items are ranked after being selected).

As for claim 7, the rejection of claim 1 is incorporated and Sandler further discloses: receiving, from the client device, a user selection interacting with an item from the query results (see column 5, lines 50-60); scoring the query processing model based on the user interaction and retraining the contextual bandit model based on the score for the query processing model (see column 13, lines 10-30; the system uses frequency, which adds a count/reward for every occurrence, uses the count to predict and then display results, and also trains based on frequency).

As for claim 8, the rejection of claim 7 is incorporated and Sandler further discloses: wherein the user selection comprises at least one of: viewing an item from the query results; adding the item to a shopping cart; favoriting the item; or ordering the item (see figure 1C, wherein an item is added to the shopping cart).

As for claim 9, the rejection of claim 7 is incorporated and Sandler further discloses: scoring a reward based on the user selection, wherein retraining the contextual bandit model comprises retraining further based on the reward (see column 13, lines 10-30; the system uses frequency, which adds a count/reward for every occurrence, uses the count to predict and then display results, and also trains based on frequency).
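Claims 7-9 describe the feedback half of the bandit: convert a user selection into a reward, then retrain so the model's predicted interaction likelihood moves toward the observed outcome. A hedged sketch of one such update, assuming a logistic scorer trained by a single SGD step (the reward values and function names are assumptions for illustration; claim 8 supplies only the interaction types):

```python
import math

def reward_from_selection(selection):
    # Claim 8 lists these interaction types; the numeric rewards are assumed.
    rewards = {"view": 0.1, "favorite": 0.3, "add_to_cart": 0.5, "order": 1.0}
    return rewards.get(selection, 0.0)

def retrain_step(weights, features, reward, lr=0.1):
    # One SGD step on a logistic scorer: nudge the chosen model's weights
    # in proportion to the prediction error on the observed reward.
    z = sum(w * f for w, f in zip(weights, features))
    predicted = 1.0 / (1.0 + math.exp(-z))
    error = reward - predicted
    return [w + lr * error * f for w, f in zip(weights, features)]
```

For example, an "order" (reward 1.0) against an untrained model (prediction 0.5) increases the weights on the active features, so the same context scores higher next time.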
As for claim 10, the rejection of claim 1 is incorporated and Sandler further discloses: wherein obtaining the one or more contextual features describing the context of the user query comprises: accessing user features from a user profile associated with a user associated with the client device (see column 6, lines 20-55 and column 7, lines 50-65; user profiles are associated with historical, behavioral, or current context and used as input).

As for claim 11, the rejection of claim 1 is incorporated and Sandler further discloses: wherein applying the contextual bandit model to the query features and the contextual features comprises applying the contextual bandit model to select two or more query processing models, the method further comprising: applying each selected query processing model to the user query and the contextual features to identify a set of query results (see column 8, line 45 to column 9, line 15; the system selects from multiple models using learning or a bandit model); and displaying an aggregation of query results from the sets of query results output by the two or more query processing models (see column 9, lines 4-15; an aggregate is displayed).

Claims 12-20 are non-transitory computer readable medium claims substantially corresponding to the method of claims 1, 3-7, and 9-11, and are thus rejected for the same reasons as set forth in the rejection of claims 1, 3-7, and 9-11.

Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIYAH STONE HARPER, whose telephone number is (571) 272-0759. The examiner can normally be reached Monday-Friday, 10:00 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mark Featherstone, can be reached at (571) 270-3750. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Eliyah S. Harper/
Primary Examiner, Art Unit 2166
February 5, 2026

Prosecution Timeline

May 14, 2025
Application Filed
Feb 05, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596694: Collaborative Automated System for Intelligent Storage Forecasting and Abend Handling (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585648: EFFICIENT QUERY EXECUTION FOR ONTOLOGY-BASED DATABASES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579119: SELECTIVE SPOOL DATA STORAGE IN AN OBJECT STORE OR LOCAL DATABASE STORAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579131: TRANSACTIONALLY CONSISTENT HNSW INDEX (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581019: SYSTEMS AND METHODS FOR QUERYING DATABASES OF CLAIMS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 85% (+11.6%)
Median Time to Grant: 4y 2m
PTA Risk: Low

Based on 764 resolved cases by this examiner. Grant probability derived from career allow rate.
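The headline figures appear to follow directly from the career counts reported above (559 granted of 764 resolved, +11.6% interview lift). A quick check of that arithmetic; whether the tool literally adds the interview lift to the base rate, rather than conditioning on interviewed cases, is an assumption here:

```python
# Sanity check of the panel's numbers against the examiner's career counts.
granted, resolved = 559, 764                   # career totals shown above
allow_rate = granted / resolved                # -> displayed as 73%
interview_lift = 0.116                         # reported +11.6% lift
with_interview = allow_rate + interview_lift   # -> displayed as 85%

print(round(allow_rate * 100), round(with_interview * 100))  # 73 85
```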
