Prosecution Insights
Last updated: April 19, 2026
Application No. 18/585,663

GENERATING MAPPING INFORMATION BASED ON IMAGE LOCATIONS

Non-Final OA: §101, §103, §112
Filed: Feb 23, 2024
Examiner: THIRUGNANAM, GANDHI
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Doordash Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% — above average (413 granted / 559 resolved; +11.9% vs TC avg)
Interview Lift: +12.3% across resolved cases with interview (moderate lift)
Typical Timeline: 3y 7m average prosecution; 42 applications currently pending
Career History: 601 total applications across all art units

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 27.1% (-12.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 559 resolved cases
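Each row above pairs the examiner's statute-specific allowance rate with its delta versus the Tech Center (TC) average, so the implied TC baseline is simply rate minus delta. A quick sketch of that arithmetic (the figures come from the table above; treating the deltas this way is an assumption about how the dashboard derives them) shows that all four deltas imply the same 40.0% TC baseline:

```python
# Statute-specific allowance rates and their deltas vs the TC average,
# as reported above: {statute: (rate %, delta %)}.
EXAMINER_RATES = {
    "101": (9.6, -30.4),
    "103": (35.8, -4.2),
    "102": (21.5, -18.5),
    "112": (27.1, -12.9),
}

def implied_tc_average(rate: float, delta: float) -> float:
    """TC average implied by a reported rate and its delta (rate - delta)."""
    return round(rate - delta, 1)

for statute, (rate, delta) in EXAMINER_RATES.items():
    print(f"§{statute}: examiner {rate}% vs implied TC avg {implied_tc_average(rate, delta)}%")
```

The uniform 40.0% result suggests all four deltas were computed against a single TC-wide baseline rather than per-statute averages.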

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites "one or more processors; and one or more computer-readable media storing instructions executable to configure the one or more processors to perform operations including:", which is an additional element recited at a high level of generality. This amounts to merely reciting the words "apply it" with the judicial exception. See MPEP 2106.05(g).

"receiving, by the one or more processors, over time, from one or more agent devices and in association with a delivery location, a plurality of images and associated respective location data, the respective location data associated with at least one of the images of the plurality of images differing from the respective location data associated with at least one other one of the images of the plurality of images;" is an additional element directed to insignificant extra-solution activity. See MPEP 2106.05(g).

In "providing the plurality of images as inputs to a machine-learning model that is trained to determine whether individual images of the plurality of images include a threshold amount of information;", the "machine learning model" is akin to using a computer as a tool to implement the abstract idea. The ML model is recited at a high level of generality without any specificity. This amounts to merely reciting the words "apply it" with the judicial exception. See MPEP 2106.05(g).

"based at least in part on the machine-learning model indicating that the individual images of the plurality of received images satisfy the threshold amount of information," is directed to a mental process of visually inspecting each image and determining if it contains sufficient features such as a door number.

In "determining, based on at least one of averaging or clustering of the respective location information associated with the plurality of images, a consensus location for the delivery location;", averaging is directed to a mathematical concept and clustering is directed to a mental process of determining whether the location information is close together or not. Determining a consensus location for the delivery location is also a mental process: a person could mentally determine whether the location information points were close to each other and then set a consensus location based on the cluster analysis.

"storing the consensus location information as mapping information associated with the delivery location." is directed to insignificant post-solution activity. See MPEP 2106.05(g).

This judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 2 is directed to data gathering and does not add a practical application or significantly more. Claims 3-5 are directed to data gathering and error checking and do not add a practical application or significantly more. Claim 6 is directed to a field of use and is not significantly more; see MPEP 2106.05(h). Claim 7 is directed to the mental process and is not significantly more. Claims 8-14 and 15-20 are rejected under similar grounds as claims 1-7 above.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-5, 11-12, and 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 4-5, 11-12, and 18-19 recite "the other image", which lacks antecedent basis; most likely this should refer back to "the another image".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mishra (10,219,112) in view of Gil (2020/0317340).

Mishra discloses claim 1, "A system comprising: one or more processors; and one or more computer-readable media storing instructions executable to configure the one or more processors to perform operations including:" (Mishra, Fig. 2A), and "receiving, by the one or more processors, over time, from one or more agent devices and in association with a delivery location, a plurality of images and associated respective location data, the respective location data associated with at least one of the images of the plurality of images differing from the respective location data associated with at least one other one of the images of the plurality of images;" (Mishra, Fig. 4A shows various geoscans with varying coordinates around the delivery location; see additionally Col. 18 lines 4-19).

Mishra (Col. 5 line 44 - Col. 6 line 27) discloses sending additional metadata along with each geoscan ("and any relevant metadata (e.g., times or dates of such geoscans; a person, a vehicle or a machine associated with such geoscans; or a task being performed thereby)"), but does not expressly disclose sending "images" as metadata.

Gil discloses agent devices (UAVs with cameras and GPS, see paragraph 171) that acquire and send images at the delivery locations (Gil, Fig. 68, #6806-6808; paragraph 455: "The method 6800 further comprises receiving one or more of a photo or video captured by the camera of the UAV, the one or more of the photo or the video indicative of the release of the parcel at the serviceable point, as shown at block 6806. The photo and/or video may be captured concurrently with or subsequent to the notification that the parcel is dis-engaged ..."; paragraph 457: "At block 6808, the method 6800 comprises communicating a confirmation of delivery of the parcel to a user computing entity based on the notification that the parcel is disengaged, wherein the confirmation includes the one or more of the photo or the video.") as well as metadata (Gil, paragraph 458: "Generally, metadata is generated when the photo and/or video are captured and digitally converted into an image. With regard to digital images, metadata provides information in addition to the image data itself, such that the additional data travels with the image data. The metadata may include location data, address data, unique identifiers, file type, data type, file size, data quality, a source of the data, a caption, a tag, a date and time of creation, one or more properties, a name, a color depth, an image resolution, an image size, and the like....").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to replace Mishra's geoscan plus metadata with Gil's image plus metadata (which includes location data). The suggestion/motivation for doing so would have been that it gives the user more data with which to analyze whether the delivery was successful. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Mishra in view of Gil discloses validating the geoscan and metadata (Mishra, Col. 8 lines 23-44: "When a geoscan is received, the geoscan may be coarsely filtered by or validated against information regarding the given location or a task to be performed there. If the geoscan is determined to be valid, the geoscan may be compared against any known points or regions in space, e.g., routing points or delivery points, in order to determine whether updating one of the points or regions in space to include the geoscan would reduce a level of uncertainty associated with the point or region so updated."), but fails to disclose "providing the plurality of images as inputs to a machine-learning model that is trained to determine whether individual images of the plurality of images include a threshold amount of information;"

Gil further discloses "providing the plurality of images as inputs to a machine-learning model that is trained to determine whether individual images of the plurality of images include a threshold amount of information;" (Gil, paragraph 459: "In further embodiments, the methods described hereinafter with regard to image (photo or video) matching may be employed by the enhanced parcel system when determining whether to generate and communicate a delivery confirmation as well. For example, the one or more of the photo and/or video may be verified by identifying elements (e.g., shapes, colors, sizes of elements within the image relative to one another, spacing of elements within the image relative to one another, angles), structures (e.g., roof, porch, stoop, door, window, stairs, fence, wrought iron, window shutters), textures (e.g., concrete, brick, natural stone, siding, tiles), and/or alphanumeric characters (e.g., house numbers, mailbox numbers, 'Welcome' doormat, sequence of alphanumeric characters) within the photo and/or video data and matching the element, structures, textures, and/or alphanumeric characters to one or more reference images stored in a database, wherein the one or more reference images are associated or linked to the serviceable point at issue, and/or are associated or linked to a geographic (e.g., GPS) location used to define the target serviceable point."; paragraph 485: "Additionally or alternatively, a convoluted neural network may be leveraged to account for changes to the appearance of the serviceable point, for example, when determining whether the captured photo and/or video are a match to a stored image.")

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to further apply Gil's validation technique to the metadata (images) of Mishra in view of Gil. The suggestion/motivation for doing so would have been that it gives the user validated data, which should be more accurate. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Mishra in view of Gil discloses "based at least in part on the machine-learning model indicating that the individual images of the plurality of received images satisfy the threshold amount of information, determining, by the one or more processors, based on at least one of averaging[1] or clustering of the respective location information associated with the plurality of images, a consensus location for the delivery location; and" (Mishra, Col. 7 lines 36-67: "The systems and methods of the present disclosure are directed to determining preferred points or regions at a given location, such as routing points and/or delivery points, for the performance of a given task. In some embodiments, routing points and/or delivery points may be defined based at least in part on geolocation estimation techniques which determine probability distributions, e.g., according to a Gaussian location hypothesis or other methods or techniques for modeling errors or uncertainty, of sensed positions at a location and any levels of uncertainty associated with the distributions, and group the sensed positions into one or more hypothetical location clusters, e.g., location hypotheses or areas of uncertainty."), and "storing, by the one or more processors, the consensus location information as mapping information associated with the delivery location." (Mishra, Col. 17 lines 43-54: "At box 390, the geoscan for the item and the task is stored in at least one data store, along with the metadata determined at box 380, and the process ends.")

Mishra in view of Gil discloses claim 2, "The system as recited in claim 1, the operations further comprising: receiving a request for an item for delivery to the delivery location;" (Mishra, Fig. 7, #720) "and retrieving the consensus location as the mapping information associated with the delivery location for use in generating a map to present on an agent device for delivery of the item to the delivery location." (Mishra, Fig. 7, #780-790, where the consensus locations are determined and stored in #710.)

Mishra in view of Gil discloses claim 3, "The system as recited in claim 1, the operations further comprising: receiving feedback indicating that an item associated with one of the received images of the plurality of images was not received or was in a wrong location; and removing the associated respective location data from being associated with the delivery location when determining the consensus location." (Mishra, Fig. 3, #310-350, discloses disregarding coordinates when coordinates are not validated for a task.)

Mishra in view of Gil discloses claim 4, "The system as recited in claim 1, the operations further comprising: receiving another image and associated respective location data for a different delivery location; providing the other image as input to the machine-learning model; and based at least in part on an output of the machine-learning model indicating that the other image fails to satisfy the threshold amount of information, sending, by the one or more processors, to an agent device that sent the other image, an instruction to capture an additional image corresponding to the different delivery location." (Gil, Fig. 70, discloses acquiring an image, comparing the image to determine whether it was delivered correctly (adverse delivery event; see claim 1 for neural network) and, in the event that the package was misdelivered, sending the/another UAV to retrieve the package, which includes photographing the delivery location. See Fig. 71.)

Mishra in view of Gil discloses claim 5, "The system as recited in claim 1, the operations further comprising: receiving another image and associated respective location data for a different delivery location; providing the other image as input to the machine-learning model; and based at least in part on an output of the machine-learning model indicating that the other image fails to satisfy the threshold amount of information, excluding the respective location data associated with the other received image from being associated with mapping information for the different delivery location." (See claims 1 and 3, where the process of claim 1 is repeated for numerous packages.)

Mishra in view of Gil discloses claim 6, "The system as recited in claim 1, the operations further comprising training the machine learning model using a plurality of images of past delivery locations for a plurality of past deliveries to densely populated structures." (Gil, paragraph 485: "Additionally or alternatively, a convoluted neural network may be leveraged to account for changes to the appearance of the serviceable point, for example, when determining whether the captured photo and/or video are a match to a stored image. For example, regarding a serviceable point, the colors of exterior paint color a house or apartment building, the colors of architectural features of a house or apartment building (e.g., shutters, front door, roof shingles, trim), and/or the colors of landscaping features at the serviceable point (e.g., lack of foliage in winter, autumn leaf color changes, spring flowering) may be changed from time to time, or from season to season. Further, regarding a serviceable point, the colors of a house or apartment building, architectural features, and/or landscaping features may appear to be visually different depending on lighting changes or lighting fluctuations resulting from weather conditions (e.g., sunny conditions, overcast or cloudy conditions, stormy and rainy conditions, foggy conditions), time of day or night, and seasons (e.g., time of sunrise and sunset being affected by seasonality). Even further, regarding a serviceable point, the appearance of the house or apartment building may fluctuate and/or be obscured by the addition of a mailbox, the addition or removal of fencing, and/or holiday decorations, for example.")

Mishra in view of Gil discloses claim 7, "The system as recited in claim 1, wherein the threshold amount of information includes a delivered item and at least one of an entrance portion, a door portion, or a unit number." (Gil, paragraph 485, quoted above with respect to claim 6.)

Claims 8-14 and 15-20 are rejected under similar grounds as claims 1-7 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GANDHI THIRUGNANAM, whose telephone number is (571) 270-3261. The examiner can normally be reached M-F 8:30-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GANDHI THIRUGNANAM/
Primary Examiner, Art Unit 2672

[1] Col. 20 lines 16-36 discloses "mean" and "std. deviation".
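For orientation, the claim-1 pipeline that the §101 and §103 analyses dissect (ML-based filtering of delivery photos, then averaging/clustering of their location data into a consensus point) can be sketched as below. This is an illustrative reading of the claim language only, not the applicant's or Mishra's actual implementation; the scoring function, the 0.5 threshold, and the clustering radius are all assumptions made for the sketch.

```python
from statistics import mean

def consensus_location(samples, score_image, threshold=0.5, eps=0.0005):
    """Hypothetical sketch of the claimed pipeline.

    samples     : iterable of (lat, lon, image) tuples from agent devices
    score_image : stand-in for the trained ML model that judges whether an
                  image carries a "threshold amount of information"
    Returns a (lat, lon) consensus for the delivery location, or None.
    """
    # Step 1: ML-style filtering -- keep only images meeting the threshold.
    fixes = [(lat, lon) for lat, lon, img in samples if score_image(img) >= threshold]
    if not fixes:
        return None

    # Step 2: naive single-linkage clustering of the surviving GPS fixes.
    clusters = []
    for fix in fixes:
        for cluster in clusters:
            if any(abs(fix[0] - f[0]) < eps and abs(fix[1] - f[1]) < eps for f in cluster):
                cluster.append(fix)
                break
        else:
            clusters.append([fix])

    # Step 3: average the largest cluster into the consensus location.
    best = max(clusters, key=len)
    return (mean(f[0] for f in best), mean(f[1] for f in best))
```

Under the rejection's framing, step 1 is the "apply it" use of a generic model, step 2 the mental-process clustering, and step 3 the mathematical-concept averaging.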

Prosecution Timeline

Feb 23, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597135
SYSTEMS AND METHODS FOR UPDATING A GRAPHICAL USER INTERFACE BASED UPON INTRAOPERATIVE IMAGING
2y 5m to grant • Granted Apr 07, 2026
Patent 12561963
CROSS-MODALITY NEURAL NETWORK TRANSFORM FOR SEMI-AUTOMATIC MEDICAL IMAGE ANNOTATION
2y 5m to grant • Granted Feb 24, 2026
Patent 12555291
METHOD FOR AUTOMATED REGULARIZATION OF HYBRID K-SPACE COMBINATION USING A NOISE ADJUSTMENT SCAN
2y 5m to grant • Granted Feb 17, 2026
Patent 12541869
GRAIN FLAKE MEASUREMENT SYSTEM, GRAIN FLAKE MEASUREMENT METHOD, AND GRAIN FLAKE COLLECTION, MOVEMENT, AND MEASUREMENT SYSTEM
2y 5m to grant • Granted Feb 03, 2026
Patent 12525007
TRAINING METHOD AND ELECTRONIC DEVICE
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 86% (+12.3%)
Median Time to Grant: 3y 7m
PTA Risk: Low

Based on 559 resolved cases by this examiner. Grant probability derived from career allow rate.
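The note above says the grant probability is derived from the career allow rate. A minimal sketch, assuming the dashboard rounds 413/559 to a whole percentage and adds the interview lift in percentage points (the exact formula is not disclosed); this arithmetic reproduces the displayed 74% and 86%:

```python
# Career counts and interview lift as shown on this page.
GRANTED, RESOLVED = 413, 559
INTERVIEW_LIFT_PP = 12.3  # percentage points

base_grant_probability = round(100 * GRANTED / RESOLVED)            # 73.88... rounds to 74
with_interview = round(base_grant_probability + INTERVIEW_LIFT_PP)  # 86.3 rounds to 86
```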
