Prosecution Insights
Last updated: April 19, 2026
Application No. 19/184,751

ITEM KNOWLEDGE GRAPH WITH LARGE LANGUAGE MODELS

Non-Final OA: §101, §103
Filed
Apr 21, 2025
Examiner
TO, BAOQUOC N
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
DoorDash Inc.
OA Round
1 (Non-Final)
90%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
98%
With Interview

Examiner Intelligence

Grants 90% — above average
90%
Career Allow Rate
854 granted / 950 resolved
+34.9% vs TC avg
+8.0%
Interview Lift
Moderate lift among resolved cases, comparing with vs. without an interview
Typical timeline
2y 9m
Avg Prosecution
Career history
979
Total Applications
across all art units; 29 currently pending

Statute-Specific Performance

§101
25.3%
-14.7% vs TC avg
§103
28.0%
-12.0% vs TC avg
§102
18.3%
-21.7% vs TC avg
§112
8.7%
-31.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 950 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continuity/reexam data: Parent data: 19184751, filed 04/21/2025, claims priority from Provisional Application 63637274, filed 04/22/2024. Child data: None. Foreign data: No foreign data information. (*) - Request to retrieve electronic copy of foreign priority from participating receiving offices.

1. Claims presented for examination: 1-20.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted on 04/21/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

3. The drawing filed on 04/21/2025 is accepted by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 (see MPEP 2106): Claims 1-20 are directed to a method, a computer, and a system, each of which belongs to a statutory class.

Step 2A, Prong One: Claims 1, 14, and 19 recite "determining, by a computer, output extraction data from the item description using a first large language model, wherein the output extraction data includes item characteristic data," which is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the steps from practically being performed in the human mind.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two: Claims 1, 14, and 19 recite "a computer," "processor," and a computer-readable medium including instructions, which is a high-level recitation of generic computer components and represents mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. "Receiving, by a computer, an item description" is insignificant extra-solution activity: this limitation recites retrieval/receiving of data (i.e., mere data gathering), or at most selecting a particular data source (i.e., the record) to be manipulated (i.e., by a transaction), and does not provide integration into a practical application. "Storing, by the computer, the output extraction data in a database" is insignificant extra-solution activity: it merely saves the data in storage, and does not provide integration into a practical application.

Step 2B: Considering the claim as a whole does not change this conclusion, and the claims are ineligible.

As to claim 2, the limitations "determining, by the computer, two or more classifications and two or more confidence levels for the item description using a machine learning classification model; and determining, by the computer, that the two or more confidence levels are below a predetermined confidence threshold" are processes that, under their broadest reasonable interpretation, cover performance of the limitations by a mental process but for the recitation of generic computer components. Nothing in the claim elements precludes the steps from practically being performed in the human mind.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 3, the limitation "training, by the computer, the machine learning classification model using the item characteristic data and the item description" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the step from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 4, the limitation "determining, by the computer, whether or not the item characteristic data matches previously stored item characteristic data in the database" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the step from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
As to claim 5, the limitation "determining, by the computer, whether or not the item characteristic data matches the previously stored item characteristic data in the database using a second large language model" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the step from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 6, the limitation "obtaining, by the computer, additional data related to the item description or related to an item associated with the item description" is insignificant extra-solution activity.

As to claim 7, the limitation "determining, by the computer, the output extraction data from the item description and the additional data using the first large language model, wherein the additional data is an item category" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the step from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
As to claim 8, the limitation "the item characteristic data comprises dietary restriction data, brand data, alcohol characteristic data, or size data" merely defines what the characteristic data is and does not amount to significantly more.

As to claim 9, the limitation "augmenting, by the computer, the item description with item details" is insignificant extra-solution activity.

As to claim 10, the limitations "the item details are for an item associated with the item description, wherein the method further comprises: performing, by the computer, one or more search engine queries for information related to the item; and generating, by the computer, the item details using results from the one or more search engine queries" are extra-solution activities that merely provide retrieval of information.

As to claim 11, the limitation "the item details include ingredients that are in an item associated with the item description" merely defines what the item details are and does not amount to significantly more.

As to claim 12, the limitation "determining, by the computer, a prompt based on the item description; determining, by the computer, a plurality of artificial neural network generated prompts using an artificial neural network based on the prompt; and creating, by the computer, an overall prompt comprising the prompt and the plurality of artificial neural network generated prompts, wherein determining the output extraction data comprises: determining, by the computer, the output extraction data based on the overall prompt" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the steps from practically being performed in the human mind.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 13, the limitation "training, by the computer, the artificial neural network using the overall prompt and the output extraction data" is a process that, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components. Nothing in the claim element precludes the step from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 15, the limitations "collecting a set of item characteristic data and item description pairs from the database; creating a first training set comprising the set of item characteristic data and item description pairs, for a first training stage; and training a machine learning classification model using the first training set, and wherein the method further comprises, before determining the output extraction data, determining two or more classifications and confidence levels for the two or more item descriptions using the machine learning classification model; and determining that the confidence levels are below a predetermined confidence threshold" are processes that, under their broadest reasonable interpretation, cover performance of the limitations by a mental process but for the recitation of generic computer components. Nothing in the claim elements precludes the steps from practically being performed in the human mind.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 16, the limitations "creating a second training set comprising the item characteristic data and the item description, for a second training stage; and training the machine learning classification model using the second training set" are processes that, under their broadest reasonable interpretation, cover performance of the limitations by a mental process but for the recitation of generic computer components. Nothing in the claim elements precludes the steps from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a mental process but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As to claim 17, the limitations "a large language model module configured to train, maintain, and/or utilize the first large language model; a classification model module configured to train, maintain, and/or utilize a machine learning classification model; and a database module configured to communicate with the database" are computer algorithms used to perform a mental process.

As to claim 18, the limitations "the item description is received from a service provider computer, wherein the computer is a central server computer that facilitates in fulfillment of fulfillment requests received from end user devices that request items from a service provider associated with the service provider computer" are generic computer components that operate to perform a mental process.
As to claim 20, the recited limitation "updating a delivery application to include the item description and the item characteristic data as an item provided by the service provider computer to end user" is an additional element that does not amount to significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

5. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (Pub. No. US 2026/0004336 A1).

As to claim 1, Wu discloses a method comprising: receiving, by a computer, an item description (text specification of the item) (paragraph 0062); determining, by the computer, output extraction data from the item description using a first large language model (the attribute extraction module 225 then receives a set of outputs from each LLM of the ensemble of LLMs; the output from the ensemble determines whether the size information is present in the image) (paragraph 0063); and storing, by the computer, the output extraction data in a database. Wu does not explicitly disclose wherein the output extraction data includes item characteristic data. However, Wu discloses, as an example, that FIG. 3 includes an image of an item with a size and count attribute description 307 for a water bottle pack 305 item. The size and count information is "24 pack, 16.9 fl oz" to describe the count of the item and the size of the item. This suggests that the 16.9 fl. oz. is a characteristic in the extraction data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include the 16.9 fl. oz. as a characteristic of the extraction data in order to provide information to the user.
As to claim 2, Wu discloses the method of claim 1, further comprising, before determining the output extraction data from the item description using the first large language model: determining (LLM 1) (paragraph 0065), by the computer, two or more classifications (classification) (paragraph 0059) and two or more confidence levels for the item description using a machine learning classification model (machine learning model) (paragraph 0077); and determining, by the computer, that the two or more confidence levels are below a predetermined confidence threshold (lower score for learning well) (paragraph 0077).

As to claim 3, Wu discloses the method of claim 2, further comprising, after storing the output extraction data: training, by the computer, the machine learning classification model (machine learning model) using the item characteristic data and the item description (to train a machine learning model based on training examples, the machine learning training module 230 applies the machine learning model to the input data in the training examples to generate output) (paragraph 0077).

As to claim 4, Wu discloses the method of claim 1, further comprising: determining, by the computer, whether or not the item characteristic data matches previously stored item characteristic data in the database (responsive to determining 326 that a threshold number of outputs have matching values of the size or count information that is present in the image, the attribute extraction module 225 updates the item attribute data with the matching values of size information of the item) (paragraph 0067) (the size is checked and the stored size information is updated).
As to claim 5, Wu discloses the method of claim 4, wherein determining whether or not the item characteristic data matches the previously stored item characteristic data in the database (responsive to determining 326 that a threshold number of outputs have matching values of the size or count information that is present in the image, the attribute extraction module 225 updates the item attribute data with the matching values of size information of the item) (paragraph 0067) (the size is checked and the stored size information is updated) further comprises: determining, by the computer, whether or not the item characteristic data matches the previously stored item characteristic data in the database using a second large language model (a similar prompt can be provided to LLM 2 314) (paragraph 0065).

As to claim 6, Wu discloses the method of claim 1, further comprising: obtaining, by the computer, additional data related to the item description or related to an item associated with the item description (given the image, extract the information of the item's size and quantity) (paragraph 0065) (quantity is the additional data related to the image).

As to claim 7, Wu discloses the method of claim 6, wherein determining the output extraction data further comprises: determining, by the computer, the output extraction data from the item description and the additional data using the first large language model, wherein the additional data is an item category ("extract the information of the item's size and quantity"; a similar prompt can be provided to LLM 2 314) (paragraph 0065).
As to claim 8, Wu discloses the method of claim 1, wherein the item characteristic data comprises dietary restriction data, brand data, alcohol characteristic data, or size data (size information) (paragraph 0065).

As to claim 9, Wu discloses the method of claim 1, further comprising: augmenting, by the computer, the item description with item details (…adding or removing an item, or adding instructions for an item…) (paragraph 0012).

As to claim 10, Wu discloses the method of claim 9, wherein the item details are for an item associated with the item description, wherein the method further comprises: performing, by the computer, one or more search engine queries for information related to the item; and generating, by the computer, the item details using results from the one or more search engine queries (searching for information related to content) (paragraph 0049).

As to claim 11, Wu discloses the method of claim 9, wherein the item details include ingredients that are in an item associated with the item description (size information) (paragraph 0065).

As to claim 12, Wu discloses the method of claim 1, further comprising: determining, by the computer, a prompt based on the item description (the prompt includes an image of an item) (paragraph 0065); determining, by the computer, a plurality of artificial neural network generated prompts using an artificial neural network based on the prompt (…the attribute extraction module 225 prompts the ensemble of LLMs with a second set of prompts…) (paragraph 0065); and creating, by the computer, an overall prompt comprising the prompt and the plurality of artificial neural network generated prompts, wherein determining the output extraction data comprises: determining, by the computer, the output extraction data based on the overall prompt ("given the image, extract the information of the item's size and quantity"; a similar prompt can be provided to LLM 2 314) (paragraph 0065).
As to claim 13, Wu discloses the method of claim 12, further comprising: training, by the computer, the artificial neural network using the overall prompt and the output extraction data (neural network) (paragraph 0029).

Claim 14 is rejected for the same reasons as claim 1. Wu discloses a computer comprising: a processor (processor) (paragraph 0082); and a computer-readable medium (computer readable media) (paragraph 0082) coupled to the processor (processor) (paragraph 0082), the computer-readable medium comprising code executable (instructions) by the processor (processor) (paragraph 0082) for implementing a method.

As to claim 15, Wu discloses the computer of claim 14, wherein the method further comprises: collecting a set of item characteristic data and item description pairs from the database (to train a machine learning model based on training examples, the machine learning training module 230 applies the machine learning model to the input data in the training examples to generate output) (paragraph 0077); creating a first training set comprising the set of item characteristic data and item description pairs, for a first training stage (paragraph 0077); and training a machine learning classification model using the first training set, and wherein the method further comprises, before determining the output extraction data, determining two or more classifications and confidence levels for the two or more item descriptions using the machine learning classification model (paragraph 0077); and determining that the confidence levels are below a predetermined confidence threshold (lower score for learning well) (paragraph 0077).

As to claim 16, Wu discloses the computer of claim 15, wherein the method further comprises: creating a second training set comprising the item characteristic data and the item description, for a second training stage (paragraph 0077); and training the machine learning classification model using the second training set (paragraph 0077).

As to claim 17, Wu discloses the computer of claim 14, further comprising: a large language model module configured to train, maintain, and/or utilize the first large language model (LLMs) (paragraph 0065); a classification model module configured to train, maintain (the machine learning training module 230 may apply an iterative process to train a machine learning model) (paragraph 0077), and/or utilize a machine learning classification model; and a database module configured to communicate with the database (the data collection module 200 also collects item data, which is information or data that identifies and describes items that are available at a retailer location) (paragraph 0043).
As to claim 18, Wu discloses the computer of claim 14, wherein the item description is received from a service provider computer (online system 140) (paragraph 0023), wherein the computer is a central server computer (retailer computing system 120) (paragraph 0023) that facilitates in fulfillment of fulfillment requests received from end user devices (user device 100) (paragraph 0011) that request items from a service provider associated with the service provider computer (online system 140) (paragraph 0023).

Claim 19 is rejected for the same reasons as claim 18. Wu discloses a system comprising: a service provider computer (online system 140) (paragraph 0023) in operative communication with a central server computer (retailer computing system 120) (paragraph 0023); and the central server computer (online system 140) (paragraph 0023) comprising: a processor; and a computer-readable medium coupled to the processor, the computer-readable medium (the one or more computer-readable media) (paragraph 0082) comprising code (instructions) (paragraph 0082) executable by the processor for implementing a method.

Claim 20 is rejected for the same reasons as claim 19. Wu discloses the system of claim 19, wherein the method further comprises: updating a delivery application to include the item description and the item characteristic data as an item provided by the service provider computer to end users (…the online system updates the item attribute data with the matching values of size information of the item) (paragraph 0002).

Conclusion

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAOQUOC N TO, whose telephone number is (571) 272-4041. The examiner can normally be reached Mon-Fri, 9 AM - 6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

BAOQUOC N. TO
Examiner
Art Unit 2154

/BAOQUOC N TO/
Primary Examiner, Art Unit 2154
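For readers mapping the claim language to an implementation, the pipeline analyzed in the rejection (receive an item description; gate on classifier confidence per claim 2; extract item characteristics with a large language model per claim 1; check for matches against stored data per claim 4) can be sketched as below. This is an illustrative sketch only, not the applicant's actual system: `call_llm`, the keyword classifier, the threshold value of 0.8, and all data shapes are assumptions for demonstration.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # assumed value; the claim only requires "a predetermined confidence threshold"


@dataclass
class ItemStore:
    """Stand-in for the claimed database of item characteristic data."""
    records: dict = field(default_factory=dict)

    def matches(self, characteristics: dict) -> bool:
        # Claim 4: check whether extracted characteristics match previously stored data.
        return characteristics in self.records.values()

    def store(self, description: str, characteristics: dict) -> None:
        self.records[description] = characteristics


def classify(description: str) -> list[tuple[str, float]]:
    # Hypothetical keyword classifier returning (label, confidence) pairs;
    # claim 2 requires two or more classifications with confidence levels.
    text = description.lower()
    return [
        ("beverage", 0.9 if "oz" in text else 0.3),
        ("alcohol", 0.9 if "abv" in text else 0.2),
    ]


def call_llm(prompt: str) -> dict:
    # Placeholder for the claimed "first large language model"; a real system
    # would call a hosted model here. This toy version only picks out numeric tokens.
    characteristics = {}
    for token in prompt.split():
        if token.replace(".", "").isdigit():
            characteristics["size"] = token
    return characteristics


def process_item(description: str, store: ItemStore) -> dict:
    # Claim 2: fall through to the LLM only when all classifier confidences are low.
    labels = classify(description)
    if all(conf < CONFIDENCE_THRESHOLD for _, conf in labels):
        extracted = call_llm(f"Extract item characteristics: {description}")
    else:
        extracted = {"category": max(labels, key=lambda lc: lc[1])[0]}
    # Claims 1 and 4: store the output extraction data, skipping exact duplicates.
    if not store.matches(extracted):
        store.store(description, extracted)
    return extracted
```

A confidently classified item takes the cheap path, while an ambiguous description is routed to the model; this mirrors the confidence-gated structure the claims recite, though any real system would differ in every detail above.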

Prosecution Timeline

Apr 21, 2025
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596744
MULTIMODAL SEARCH ON WEARABLE SMART DEVICES
2y 5m to grant Granted Apr 07, 2026
Patent 12561362
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Feb 24, 2026
Patent 12554716
TIME SERIES DATA QUERY METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12541501
SYSTEMS AND METHODS FOR UNIFIED DATA VALIDATION
2y 5m to grant Granted Feb 03, 2026
Patent 12504928
Determining Corrective Actions for a Storage Network Based on Event Records
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
98%
With Interview (+8.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 950 resolved cases by this examiner. Grant probability derived from career allow rate.
