Prosecution Insights
Last updated: April 19, 2026
Application No. 19/219,883

Dynamic Selection of Machine-Learning Large Language Models Based on Queries

Status: Non-Final OA (§103)
Filed: May 27, 2025
Examiner: SHAH, VAISHALI
Art Unit: 2156
Tech Center: 2100 — Computer Architecture & Software
Assignee: Maplebear Inc.
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (128 granted / 224 resolved; +2.1% vs TC avg)
Interview Lift: +57.0% on resolved cases with an interview (strong)
Typical Timeline: 3y 8m average prosecution; 27 applications currently pending
Career History: 251 total applications across all art units
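The headline allow rate above can be reproduced from the raw counts shown on this page (this check is an editor's illustration, not part of the examiner data feed):

```python
# Career allow rate from the counts reported above: 128 granted of 224 resolved.
granted, resolved = 128, 224
allow_rate_pct = granted / resolved * 100  # ≈ 57.1

print(round(allow_rate_pct))  # prints 57, matching the 57% career allow rate shown
```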

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

TC comparisons use a Tech Center average estimate. Based on career data from 224 resolved cases.

Office Action

§103
DETAILED ACTION

In response to the communication filed on 27 May 2025, this is the first Office Action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-6, 8-9, 11-13, 15-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bender et al. (US 2024/0127797 A1, hereinafter “Bender”) in view of Gangwar et al. (US 12,517,931 B2, hereinafter “Gangwar”).

Regarding claim 1, Bender teaches A computer-implemented method, comprising: (see Bender, [0112] “methods described herein, may include or be executed on one or more computer systems”).
obtaining a plurality of queries from users and for each query in the plurality of queries, obtaining a respective model deployment selected for the query among a set of model deployments; (see Bender, [0269] “based on a history of previously-used or previously-generated queries… may access a history of queries via a database, where the history may include both the queries and the set ontologies used to generate or update a query… may analyze a history of queries to categorize the n-grams of the queries into nouns, pronouns, verbs, adjectives, or the like… may then determine a count of the n-grams and generate or update a text-generation model based on the count of n-grams used”; [0102] “may use a trained transformer neural network or other machine learning model to determine a set of dialog states values for a query and use the dialog state values in conjunction with n-grams of the query or associated concepts of the n-grams to retrieve a document”; [0372] “wherein different ontologies in the set of ontologies are learned based on different language models among the plurality of language models”). for each query in the plurality of queries, assigning the query to a respective category among a set of categories by applying one or more machine-learning models to information obtained from the query; (see Bender, [0269] “some embodiments may analyze a history of queries to categorize the n-grams of the queries into nouns, pronouns, verbs, may generate a history-specific vocabulary including a first query n-gram or text structure categorized as a "why" query indicating that a query is requesting information on the cause of a subject matter”; [0204] “each respective topic score corresponding to a respective text section and may indicate relevance to a topic, where the topic may be determined from a query or an updated query. 
For example, a first topic of a query may include the phrase “atrial fibulation” based on the query including the phrase “atrial fibulation,” and a second topic of the query may include the acronym “NVAF” based on a set of cross-graph associations between the n-gram “NVAF” and the n-gram “Atrial Fibulation”). generating a dataset, wherein for each category in the set of categories, the dataset includes a mapping between the category and a respective model deployment for the category identified based on one or more queries assigned to the category, wherein the dataset is stored in a database; (see Bender, [0205] “one or more probabilistic models may be used to score a text section to determine relevance with a document or a query used to retrieve the document… may use a latent Dirichlet allocation (LDA), latent semantic analysis (LSA), or the like…may generate topics for a document based on a LDA model or other probabilistic model and then determine the topic scores of text sections of the document based on the selected topics… may then determine the probability of a text section being relevant to a specified topic based on a frequency of mentioning the topic, where the topic may be mapped to by a query provided by the user via an ontology graph”; [0236] “may train a text summarization model and update the output of the text summarization model with ontologies indicated by a user profile after the text summarization model as provided the sequence of n-gram”; [0048] “may store ontology data in a set of SQL tables of the ontology database 138”). receiving, from a client device, a user query; (see Bender, [0044] “the computer system 110 may use ontology data obtained from the ontology database 138 to retrieve a set of documents from the document database 134 in response to a query provided by the client computing device 104”). 
assigning the user query to a particular category of the set of categories by applying the one or more machine-learning models to information obtained from the user query; (see Bender, [0102] “may use a trained transformer neural network or other machine learning model to determine a set of dialog states values for a query and use the dialog state values in conjunction with n-grams of the query or associated concepts”; [0101] “may expand the query by determining associated concepts of the query via clusters or other aggregations of learned representations of n-grams of the query in a domain space combining ontology graphs at different hierarchies. For example, some embodiments may receive a query and match an n-gram of the query to a first concept via an embedding vector of the n-gram being part of a cluster of vectors associated with the concept”; [0204] “each respective topic score corresponding to a respective text section and may indicate relevance to a topic, where the topic may be determined from a query or an updated query. For example, a first topic of a query may include the phrase “atrial fibulation” based on the query including the phrase “atrial fibulation,” and a second topic of the query may include the acronym “NVAF” based on a set of cross-graph associations between the n-gram “NVAF” and the n-gram “Atrial Fibulation”). 
identifying a model deployment mapped to the particular category from the database; (see Bender, [0224] “may retrieve two or more text summarization models for generating a sequence of n-grams based on a user being associated with two different domains or domain class values… may then select which of the text summarization models to use based on a preference weight, where the preference weight may be a… categorical value”; [0277]-[0278] “based on a match between a set of terminology used in a query and a set of terminology of a set of ontologies, a user may be assigned with the domain category values "entomologist" and "expert"… may use the same model to determine the second set of learned representations that was used to determine the first set of learned representations corresponding with the set of computer-generated queries”). … the identified model deployment (see Bender, [0224] “may retrieve two or more text summarization models for generating a sequence of n-grams based on a user being associated with two different domains or domain class values… may then select which of the text summarization models to use based on a preference weight, where the preference weight may be a… categorical value… may then retrieve a second set of neural network parameters for the text summarization model in response to a determination that a second user is associated with a second domain category value”; [0277]-[0278] “based on a match between a set of terminology used in a query and a set of terminology of a set of ontologies, a user may be assigned with the domain category values "entomologist" and "expert"… may use the same model to determine the second set of learned representations that was used to determine the first set of learned representations corresponding with the set of computer-generated queries”). 
and providing a response obtained from the identified model deployment to the client device as a response to the user query (see Bender, [0102] “may use one or more machine learning models to retrieve documents, summarizations based on documents, or the like as part of providing semantic search results after receiving a query”; [0104] “can train a BERT-based machine learning model to predict answers based on training queries from a stored library of queries and answers, where the answers for the queries may include semantic search results”). Bender does not explicitly teach providing the user query to the identified model deployment for execution; However, Gangwar discloses intent mapping and teaches providing the user query to trained model for execution; (see Gangwar, [col 13 lines 32-35] “Similarly, the ML model can be selected from a plurality of ML models based on the knowledgebase that is to be processed for generating said trained model”; [col 18 lines 24-35] “processing, through a machine learning (ML) model of the system, training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model; at step 606, generating, using the trained model, a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query, and facilitate response to the end-user query based on video frame response that is mapped with the predicted intent”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of providing the user query for execution, assigning the query to a category, the set of model deployments, associating a particular model deployment, estimated outputs, reducing the loss function, identifying the particular model deployment selected, mapping the particular model deployment as the selected model deployment, and generating training datasets, as disclosed and taught by Gangwar, in the system taught by Bender, to yield the predictable results of providing an automated and improved user experience solution (see Gangwar, [col 20 lines 9-14] “provides a unique and inventive solution for facilitating generation of one or more automated visual responses to a user query based on a machine learning based architecture, thus providing an automated and improved user experience solution”).

Claims 8 and 15 incorporate substantively all the limitations of claim 1 in a computer-readable medium form and a system form, respectively (see Bender, [0117] “System memory 520 may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors 510a-510n) to cause the subject matter and the functional operations described herein”), and are rejected under the same rationale.
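[Editor's note] The routing pipeline recited in claim 1 — categorize historical queries, map each category to a model deployment, then look up the deployment for an incoming query — can be sketched in plain Python. This is an illustration only, not taken from the application or the cited art; every name is hypothetical, and a trivial keyword rule stands in for the claimed machine-learning classifier.

```python
from collections import Counter, defaultdict

def assign_category(query: str) -> str:
    """Stand-in for the claimed ML classifier (hypothetical keyword rule)."""
    return "code" if "python" in query.lower() else "general"

def build_dataset(history: list[tuple[str, str]]) -> dict[str, str]:
    """Build the claimed dataset: map each category to the model deployment
    most often selected for queries previously assigned to that category."""
    counts: defaultdict[str, Counter] = defaultdict(Counter)
    for query, deployment in history:
        counts[assign_category(query)][deployment] += 1
    return {cat: ctr.most_common(1)[0][0] for cat, ctr in counts.items()}

def route(dataset: dict[str, str], user_query: str) -> str:
    """Identify the deployment mapped to the user query's category and return it."""
    return dataset[assign_category(user_query)]

dataset = build_dataset([("debug my python script", "code-llm"),
                         ("recipe ideas", "chat-llm")])
print(route(dataset, "a python question"))  # prints code-llm
```

The lookup is the essence of the claim: selection happens once per category when the dataset is built, so serving a query is a dictionary read rather than a per-query model comparison.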
Regarding claim 2, the proposed combination of Bender and Gangwar teaches wherein for each query, assigning the query to the respective category further comprises: (see Bender, [0102] “may use a trained transformer neural network or other machine learning model to determine a set of dialog states values for a query and use the dialog state values in conjunction with n-grams of the query or associated concepts”; [0101] “may expand the query by determining associated concepts of the query via clusters or other aggregations of learned representations of n-grams of the query in a domain space combining ontology graphs at different hierarchies. For example, some embodiments may receive a query and match an n-gram of the query to a first concept via an embedding vector of the n-gram being part of a cluster of vectors associated with the concept”; [0204] “each respective topic score corresponding to a respective text section and may indicate relevance to a topic, where the topic may be determined from a query or an updated query. For example, a first topic of a query may include the phrase “atrial fibulation” based on the query including the phrase “atrial fibulation,” and a second topic of the query may include the acronym “NVAF” based on a set of cross-graph associations between the n-gram “NVAF” and the n-gram “Atrial Fibulation”). applying a machine-learning embedding model to the query to generate a query embedding; (see Bender, [0161] “may use a transformer neural network model… may determine a set of embedding vectors based on the n-grams of the user-provided query using the transformer neural network”). applying the machine-learning embedding model to the set of categories to generate a set of category embeddings; and (see Bender, [0131] “two or more of the n-grams in the box 711, 721, or 731 may be associated with different domains or classes. 
For example, the n-gram “homo sapien” may be associated with a first embedding vector that is indicated to be part of a first domain labeled “biology” via an associated ontology graph vertex of an ontology graph categorized as being of the first domain. Additionally, the n-gram “clients” may be associated with a second embedding vector that is indicated to be part of a second domain labeled “business” via an associated ontology graph vertex of an ontology graph categorized as being of the second domain”). assigning the query to the respective category (see Gangwar, [col 12 lines 5-7] “wherein each query is associated/mapped with an intent/category/classification that reflects the purpose/intent behind the query”) having a corresponding category embedding (see Bender, [0131] “two or more of the n-grams in the box 711, 721, or 731 may be associated with different domains or classes. For example, the n-gram “homo sapien” may be associated with a first embedding vector that is indicated to be part of a first domain labeled “biology” via an associated ontology graph vertex of an ontology graph categorized as being of the first domain. Additionally, the n-gram “clients” may be associated with a second embedding vector that is indicated to be part of a second domain labeled “business” via an associated ontology graph vertex of an ontology graph categorized as being of the second domain”) below a threshold distance (see Bender, [0077] “may determine that a plurality of pairwise distances between a first vector and a plurality of other vectors is less than a distance threshold”) from the query embedding (see Bender, [0161] “may use a transformer neural network model… may determine a set of embedding vectors based on the n-grams of the user-provided query using the transformer neural network”). The motivation for the proposed combination is maintained. 
Claims 9 and 16 incorporate substantively all the limitations of claim 2 in a computer-readable medium form and system form respectively and are rejected under the same rationale. Regarding claim 4, the proposed combination of Bender and Gangwar teaches further comprising: obtaining a training dataset including a plurality of training examples, a training example indicating a previous query and a label indicating a known category of the previous query; (see Bender, [0104] “may train a machine learning model based on a set of training queries and a corresponding set of training documents that should be retrieved when the system is provided with the set of training queries”; [0233] “may train and use a plurality of summarization models. In some embodiments, each summarization model of the plurality of summarization models may be labeled with or otherwise associated with different domains of knowledge… may train a respective summarization model by using a respective set of training documents labeled with a respective domain of knowledge as training inputs. After obtaining a query and identifying the respective domain based on a user context parameter, some embodiments may then retrieve the respective summarization model and corresponding model parameters (e.g., neural network parameters, statistical model parameters, or the like) associated with the respective domain”). applying parameters of the one or more machine-learning models to the previous queries of the training examples (see Bender, [0104] “may train a machine learning model based on a set of training queries and a corresponding set of training documents that should be retrieved when the system is provided with the set of training queries”; [0233] “may train and use a plurality of summarization models. 
In some embodiments, each summarization model of the plurality of summarization models may be labeled with or otherwise associated with different domains of knowledge… may train a respective summarization model by using a respective set of training documents labeled with a respective domain of knowledge as training inputs. After obtaining a query and identifying the respective domain based on a user context parameter, some embodiments may then retrieve the respective summarization model and corresponding model parameters (e.g., neural network parameters, statistical model parameters, or the like) associated with the respective domain”) to generate estimated outputs; (see Gangwar, [col 15 lines 58-61] “wherein the output may be in form of one or more automated visual responses based on prediction by the trained learning module of the ML engine”). generating a loss function (see Bender, [0223] “may use this coverage loss value as part of a loss function”) indicating a difference between (see Bender, [0175] “based on differences between”) the estimated outputs (see Gangwar, [col 15 lines 58-61] “wherein the output may be in form of one or more automated visual responses based on prediction by the trained learning module of the ML engine”) and the known labels; and (see Bender, [0233] “may train and use a plurality of summarization models. In some embodiments, each summarization model of the plurality of summarization models may be labeled with or otherwise associated with different domains of knowledge… may train a respective summarization model by using a respective set of training documents labeled with a respective domain of knowledge as training inputs. 
After obtaining a query and identifying the respective domain based on a user context parameter, some embodiments may then retrieve the respective summarization model and corresponding model parameters (e.g., neural network parameters, statistical model parameters, or the like) associated with the respective domain”). backpropagating the parameters of the one or more machine-learning models (see Bender, [0257] “use a feed-forward neural network with a backpropagation mechanism to determine… the model parameters of the neural network may be transferred from a previous data source”) to reduce the loss function (see Gangwar, [col 14 lines 20-21] “can extract information during the training to minimize loss function”). The motivation for the proposed combination is maintained. Claims 11 and 18 incorporate substantively all the limitations of claim 4 in a computer-readable medium form and system form respectively and are rejected under the same rationale. Regarding claim 5, the proposed combination of Bender and Gangwar teaches further comprising: for each category in the set of categories, obtaining model deployments associated with the one or more queries assigned to the category; (see Bender, [0205] “one or more probabilistic models may be used to score a text section to determine relevance with a document or a query used to retrieve the document… may use a latent Dirichlet allocation (LDA), latent semantic analysis (LSA), or the like…may generate topics for a document based on a LDA model or other probabilistic model and then determine the topic scores of text sections of the document based on the selected topics… may then determine the probability of a text section being relevant to a specified topic based on a frequency of mentioning the topic, where the topic may be mapped to by a query provided by the user via an ontology graph”; [0236] “may train a text summarization model and update the output of the text summarization model with ontologies indicated by a user 
profile after the text summarization model as provided the sequence of n-gram”; [0048] “may store ontology data in a set of SQL tables of the ontology database 138”). identifying a particular model deployment selected to process (see Gangwar, [col 8 lines 57-66] “The system may further include a machine learning (ML) engine (216) that can be configured to process, through an appropriately selected machine learning (ML) model of the system, training data comprising the set of potential queries, the video frame responses corresponding to each of said set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model. The trained model can then be used to generate a prediction engine (214) configured to process an end-user query”) for a threshold number of (see Bender, [0389] “satisfying the threshold”) the one or more queries; and (see Gangwar, [col 8 lines 57-66] “The system may further include a machine learning (ML) engine (216) that can be configured to process, through an appropriately selected machine learning (ML) model of the system, training data comprising the set of potential queries, the video frame responses corresponding to each of said set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model. The trained model can then be used to generate a prediction engine (214) configured to process an end-user query”). 
mapping the particular model deployment as the selected model deployment for the category (see Gangwar, [col 9 lines 14-17] “can enable generation of a plurality of datasets, wherein each dataset may include one or more pre-defined visual responses to a pre-defined/potential query”; [col 15 lines 35-41] “The dataset may contain expressions and their relevant categories or classes called Intents, wherein based on such a dataset (list of intents and expressions) created for the knowledge base, an algorithm may be selected and the learning module may be trained with the algorithm using the knowledge base/dataset may be trained”; [col 13 line 66 – col 14 line 2] “wherein the processing may include extraction of one or more attributes associated with each potential query and each corresponding dataset/video frame response to train the trained model (218)”). The motivation for the proposed combination is maintained.

Claims 12 and 19 incorporate substantively all the limitations of claim 5 in a computer-readable medium form and system form respectively and are rejected under the same rationale.

Regarding claim 6, the proposed combination of Bender and Gangwar teaches further comprising: receiving an indication of positive feedback from a user associated with the user query; (see Bender, [0226] “may receive a feedback message indicating that a summary is accurate and, in response, some embodiments may increase a preference weight associated with the set of ontologies used to generate the summary”). generating a training dataset including the user query and the particular category assigned to the user query; and (see Gangwar, [col 12 lines 43-46] “training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model (218)”).
fine-tuning parameters of the one or more machine-learning models (see Bender, [0186] “After generating or updating the learning model 1210, some embodiments may perform a set of fine-tuning operations represented by the fine tune training function 1220… where the fine tune training function 1220 may be limited to updating a subset of the parameters of the learning model 1210”) based on the training dataset (see Gangwar, [col 12 lines 43-46] “training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model (218)”). The motivation for the proposed combination is maintained. Claims 13 and 20 incorporate substantively all the limitations of claim 6 in a computer-readable medium form and system form respectively and are rejected under the same rationale. Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bender in view of Gangwar further in view of Kalache et al. (US 11,895,356 B1, hereinafter “Kalache”). Regarding claim 3, the proposed combination of Bender and Gangwar teaches further comprising: for a query in the plurality of queries, obtaining a set of responses (see Bender, [0146] “The account may also store or include links to a history of retrieved documents, feedback messages or indicators from the user indicating the relevance of documents, a set of previously-entered queries, age, ethnicity, geographic location, or the like. 
As described further below, some embodiments may use account parameters to determine the relevance of a set of retrieved documents or a set of expanded queries generated from an initial query”) from the set of model deployments generated by executing the query; (see Gangwar, [col 13 lines 32-35] “Similarly, the ML model can be selected from a plurality of ML models based on the knowledgebase that is to be processed for generating said trained model”; [col 18 lines 24-35] “processing, through a machine learning (ML) model of the system, training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of a potential queries to generate a trained model; at step 606, generating, using the trained model, a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query, and facilitate response to the end-user query based on video frame response that is mapped with the predicted intent”). … to cause display of the set of responses and a request to select a preferred response to the query; (see Bender, [0226] “the UI may include a set of UI elements that, when interacted with by a user, may indicate a feedback message provided by the user. The feedback message may be used to adjust a preference weight associated with an ontology graph. By adjusting the preference weights, some embodiments may modify the degree to which a specific ontology is used when generating a summary”). obtaining a selection of the preferred response; and (see Bender, [0226] “may receive a feedback message indicating that a summary is accurate and, in response, some embodiments may increase a preference weight associated with the set of ontologies used to generate the summary”). 
associating a particular model deployment (see Gangwar, [col 11 line 63 – col 12 line 7] “The processing engine (208) may include one or more engines selected from any of a bot engine (212), a prediction engine (214), a machine learning (ML) engine (216), learning module (218), and other engines (220)… one or more potential queries that the entity (attempting to make the bot) is likely to be asked along with video frame responses to each of the one or more potential queries, wherein each query is associated/mapped with an intent/ category/classification that reflects the purpose/intent behind the query”) that generated the preferred response (see Bender, [0226] “may receive a feedback message indicating that a summary is accurate and, in response, some embodiments may increase a preference weight associated with the set of ontologies used to generate the summary”) as the selected model deployment for the query (see Bender, [0269] “based on a history of previously-used or previously-generated queries… may access a history of queries via a database, where the history may include both the queries and the set ontologies used to generate or update a query… may analyze a history of queries to categorize the n-grams of the queries into nouns, pronouns, verbs, adjectives, or the like… may then determine a count of the n-grams and generate or update a text-generation model based on the count of n-grams used”). The proposed combination of Bender and Gangwar does not explicitly teach transmitting instructions to another client device. However, Kalache discloses transmitting request to second client and teaches transmitting instructions to another client device (see Kalache, [col 6 lines 16-18] “transmits a… request 224-2 to the second client device 202-2”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of transmitting information to another client device as being disclosed and taught by Kalache, in the system taught by the proposed combination of Bender and Gangwar to yield the predictable results of improving social sharing of information (see Kalache, [col 3 lines 19-22] “The present disclosure relates generally to systems and methods for improving social sharing of video information produced by a game application or other interactive software application”). Claims 10 and 17 incorporate substantively all the limitations of claim 3 in a computer-readable medium form and system form respectively and are rejected under the same rationale. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bender in view of Gangwar further in view of Chakraborty et al. (US 2014/0270482, hereinafter “Chakraborty”). Regarding claim 7, the proposed combination of Bender and Gangwar teaches iteratively perform steps… (see Bender, [0077] “then iteratively perform these steps”). updating the dataset to incorporate updated mappings for… (see Bender, [0040] “may update an ontology graph based on the embedding vectors or other learned representations”; [0166] “may include obtaining an update for the set of ontology graphs”; [0308] “An ontology update request may include a request to update an n-gram mapped to the ontology, remove an n-gram from an ontology, or add an n-gram to an ontology. 
For example, an ontology update request may include a first n-gram, a domain category value, and a function argument indicating that the first n-gram should be removed from the ontology graph(s) categorized with the domain category value”) and the set of model deployments (see Bender, [0295] “may update a weight, bias, or other model parameter associated with an n-gram mapped to a vertex of an ontology graph”).

The proposed combination of Bender and Gangwar does not explicitly teach refining the set of categories to generate a refined set of categories with a higher degree of granularity; and the refined set of categories. However, Chakraborty discloses classification and teaches refining the set of categories to generate a refined set of categories with a higher degree of granularity; and… the refined set of categories (see Chakraborty, [0088] “further refine or clarify the image classification to a finer degree of granularity”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of refining categories as disclosed and taught by Chakraborty, in the system taught by the proposed combination of Bender and Gangwar, to yield the predictable results of improving the ability of the entity interaction recognition system (see Chakraborty, [0050] “To improve the ability of the entity interaction recognition system 112 to appropriately classify such an image, the localized pose constraint assumes that the people who are physically close to each other also share the same pose.”).

Claim 14 incorporates substantively all the limitations of claim 7 in a computer-readable medium form and is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISHALI SHAH whose telephone number is (571)272-8532. The examiner can normally be reached Monday - Friday (7:30 AM to 4:00 PM).
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, AJAY BHATIA, can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/VAISHALI SHAH/
Primary Examiner, Art Unit 2156

Prosecution Timeline

May 27, 2025
Application Filed
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596730
SYSTEM TO ASSIST USERS OF A SOFTWARE APPLICATION
2y 5m to grant · Granted Apr 07, 2026
Patent 12585682
METHOD AND SYSTEM FOR GENERATING LONGFORM TECHNICAL QUESTION AND ANSWER DATASET
2y 5m to grant · Granted Mar 24, 2026
Patent 12579193
SELF-DISCOVERY AND CONSTRUCTION OF TYPE-SENSITIVE COLUMNAR FORMATS ON TYPE-AGNOSTIC STORAGE SERVERS TO ACCELERATE OFFLOADED QUERIES
2y 5m to grant · Granted Mar 17, 2026
Patent 12579199
SYSTEMS AND METHODS FOR TRACKING DOCUMENT REUSE AND AUTOMATICALLY UPDATING DOCUMENT FRAGMENTS ACROSS ONE OR MORE PLATFORMS
2y 5m to grant · Granted Mar 17, 2026
Patent 12572604
VEHICLE DATA COLLECTION SYSTEM AND METHOD INCLUDING RELIABILITY INFORMATION FOR A STORAGE UNIT FOR STORING PARTIAL LOG DATA
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+57.0%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 224 resolved cases by this examiner. Grant probability derived from career allow rate.
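The "derived from career allow rate" note can be verified directly from the figures in the Examiner Intelligence panel (128 granted of 224 resolved). A minimal sketch of that arithmetic, using a hypothetical helper name `career_allow_rate` and assuming the simple granted/resolved ratio is all that underlies the headline number:

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Fraction of this examiner's resolved cases that were granted."""
    if resolved <= 0:
        raise ValueError("resolved case count must be positive")
    return granted / resolved

# Figures from the Examiner Intelligence panel: 128 granted / 224 resolved
rate = career_allow_rate(128, 224)
print(f"{rate:.0%}")  # prints "57%", matching the stated grant probability
```

How the +57.0% interview lift combines with this base rate to reach the 99% with-interview figure is not specified on the page, so no adjustment step is sketched here.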
