Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,563

USING A TRAINED MODEL TO PREDICT AND PREVENT FAILED DELIVERIES

Non-Final OA: §101, §103, §112
Filed: Mar 22, 2024
Examiner: CLARE, MARK C
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Maplebear Inc.
OA Round: 5 (Non-Final)
Grant Probability: 13% (At Risk)
OA Rounds: 5-6
To Grant: 2y 11m
With Interview: 33%

Examiner Intelligence

Career Allow Rate: 13% (20 granted / 152 resolved; -38.8% vs TC avg)
Interview Lift: +19.4% (allow rate in resolved cases with vs. without an interview)
Avg Prosecution: 2y 11m
Total Applications: 182 across all art units (30 currently pending)
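The panel's headline numbers are simple ratios over resolved cases. A minimal sketch of how such dashboards derive them; the 20/152 counts come from the panel, while the with/without-interview split below is hypothetical (the panel reports only the aggregate lift):

```python
# Career allow rate and interview lift as commonly computed by
# prosecution-analytics dashboards. The 20/152 counts come from the panel
# above; the with/without-interview counts are hypothetical placeholders.

def allow_rate(granted: int, resolved: int) -> float:
    return granted / resolved

career = allow_rate(20, 152)        # ~13.2%, shown as the 13% career rate
with_iv = allow_rate(9, 31)         # hypothetical: resolved cases with an interview
without_iv = allow_rate(11, 121)    # hypothetical: resolved cases without one
lift = with_iv - without_iv         # difference in allow rates = "interview lift"

print(f"career allow rate: {career:.1%}")   # career allow rate: 13.2%
print(f"interview lift:    {lift:+.1%}")
```

The lift here is a raw rate difference; real dashboards may also control for application age or art unit, which the aggregate figure above does not reveal.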

Statute-Specific Performance

§101: 32.0% (-8.0% vs TC avg)
§103: 30.7% (-9.3% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 28.9% (-11.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 152 resolved cases.

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the RCE filed on 12/08/2025. Claims 1, 5, 7, 10-12, 14, 17-18, and 20-21 have been amended and are hereby entered. Claims 1-8, 10-15, and 17-21 are currently pending and have been examined.

Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/08/2025 has been entered.

Response to Applicant's Arguments

Claim Rejections – 35 USC § 112

The present amendments to Claim 21 obviate the previous 112(b) rejection thereto; therefore, this rejection is withdrawn.

Claim Rejections – 35 USC § 101

Applicant's arguments regarding the 101 analysis have been considered and are unpersuasive. Applicant continues to repeat the same arguments containing the same misapprehension and plain meaning-based usage of "meaningful limits" as has been noted in multiple previous Office Actions for previous variations of the same argued functionality. See previous Office Actions and Interviews for more information on this standard, some content of which is repeated below.
Further, Applicant continues to quote language from the MPEP Memorandum of August 4, 2025 and the Bilski case which were addressed in the Final Rejection of 10/09/2025, such quoted language discussing the existence of particular 101 subject matter eligibility standards (e.g., the particular machine standard, an argument thoroughly refuted in the previous Office Action despite the conclusory nature of this argument previously and presently) absent any meaningful attempt to demonstrate that the claims as presently amended meet such standards. Indeed, other than these quoted passages, the present 101 arguments contain almost no discussion of the present invention or relation of said present invention to these standards whatsoever, though despite this lack of meaningful content, Examiner continues to attempt to steelman these Remarks and address them below. These quotes and the conclusory statements made in relation thereto are no more persuasive now than when advanced previously, and remain unpersuasive for essentially the same reasons previously explained. See the Final Rejection of 10/09/2025 for more information. Further, Applicant continues to misapprehend how this argued functionality, even as presently amended, is analyzed under Step 2A, Prong One standards for essentially the same reasons as has been explained in previous Office Actions and Interviews. More specifically, Applicant seemingly believes that claiming the instruction and execution of delivery operations by way of an autonomous robot renders the entirety of this functionality non-abstract. This is not the case, particularly in the context of the claims as a whole. 
While the claimed "fully-autonomous robot" is indeed a non-abstract additional element, as are the computer elements which communicate the claimed collection and navigation instructions, these delivery instructions themselves, as well as their communication to and execution by a delivery agent, remain abstract ideas in view of the claims as a whole. As explained in previous Office Actions and interviews, merely claiming this delivery agent as "a/the fully-autonomous robot" does not make this otherwise. Relatedly, and as discussed previously, these results of the Step 2A, Prong One analysis are important to the remainder of the 101 subject matter eligibility analysis, as integration into a practical application under Step 2A, Prong Two or embodiment of an inventive concept under Step 2B must occur by way of any recited additional elements (or the combination thereof) rather than the recited judicial exceptions (i.e., here, abstract ideas). As the vast majority of the claims recite abstract ideas rather than additional elements, this leaves little here with which Applicant may make such Step 2A, Prong Two and 2B arguments. Further, while the above-discussed quotation of the Bilski case references the particular machine standard, nothing in the present arguments as relates to the present invention appears to have anything to do with this standard, much less make a cogent argument as to why Applicant seemingly believes the present invention should be found to be eligible under this standard. Further regarding the particular machine standard, Examiner quotes the following language from MPEP 2106.05(b), the content of which was already discussed and explained to Applicant in the previous Office Action: "It is noted that while the application of a judicial exception by or with a particular machine is an important clue, it is not a stand-alone test for eligibility" (emphasis in original).
Thus, Examiner again notes that even if Applicant's unexplained conclusory statement regarding this standard was correct (which, to be clear, it is not as explained in the previous Office Action), more would be required to evidence subject matter eligibility under 101.

Claim Rejections – 35 USC § 103

Examiner agrees that the claims as presently amended are not obvious over the prior art. See discussion of novel and non-obvious subject matter below for more information.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8, 10-15, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claims 1, 14, and 20, the limitations of obtaining order data with information about an order placed by a user; retrieving user data with information about the user; responsive to obtaining the order data, assigning a fulfillment of the order to a delivery agent; upon assigning the fulfillment of the order, instructing the delivery agent to complete a fulfillment process for the order by collecting items of the order in a retailer location; physically collecting, by the delivery agent using the collection instructions, the items in the retailer location; upon collecting the items in the retailer location, controlling a movement of the delivery agent from the retailer location to a delivery location associated with the user; moving, along a navigation route identified using the navigation instructions, the delivery agent from the retailer location to the delivery location for delivering the items to the user at the delivery location; obtaining, during the fulfillment process for the order, fulfillment data associated with a corresponding stage of a
plurality of stages of the fulfillment process for the order; accessing, during the corresponding stage of the fulfillment process, a delivery prediction model of the online system, wherein the delivery prediction model is trained to predict a likelihood of a delivery for the order ending up as a failed delivery in which the online system receives confirmation of delivery from the delivery agent but also receives a message from the user that delivery did not occur; applying, during the corresponding stage of the fulfillment process, the delivery prediction model to the order data, the user data, and the fulfillment data to generate the likelihood of the failed delivery for the order predicted during the corresponding stage of the fulfillment process; comparing the likelihood of the failed delivery to a threshold value; responsive to the likelihood of the failed delivery being greater than the threshold value, identifying a corresponding friction action associated with the corresponding stage of the fulfillment process to prevent an occurrence of the failed delivery for the order; applying, during the corresponding stage of the fulfillment process, the corresponding friction action at the online system that causes a device associated with the delivery agent to display a user interface with content intended to prevent the occurrence of the failed delivery for the order, wherein, at one instance of the corresponding stage of the fulfillment process that represents a delivery stage of the fulfillment process, the content comprises an instruction for the delivery agent to take, using the device associated with the delivery agent, a specific number and types of pictures of a delivery of the order for confirming a successful delivery of the order; sending, to the user, a message with information about an identification number, wherein the sending causes the display of a first user interface with a first message and the identification number, the first message prompting the user
to communicate the identification number when the delivery agent delivers the order to the delivery location; receiving, from the user, the identification number; and responsive to receiving the identification number from the user, sending, to the delivery agent, a message, wherein the sending the message causes the display of a second message and the identification number, the second message prompting the delivery agent to communicate the identification number; receiving, from the delivery agent, a message including the identification number and an indication of a successful completion of the fulfillment process; responsive to receiving the message, generating a delivery result including the indication about the successful completion of the fulfillment process; and re-training the delivery prediction model by updating, using the delivery result, a set of parameters of the delivery prediction machine-learning model, as drafted, are processes that, under their broadest reasonable interpretations, cover certain methods of organizing human activity. For example, these limitations fall at least within the enumerated categories of commercial or legal interactions and/or managing personal behavior or relationships or interactions between people (see MPEP 2106.04(a)(2)(II)). 
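The identification-number exchange recited in these limitations is, in substance, a one-time confirmation-code handshake: the system issues a code to the user, the user communicates it to the delivery agent at handoff, and the agent's entry must match before the fulfillment is marked complete. A minimal sketch; the class, method names, and six-digit format are illustrative assumptions, not the claimed design:

```python
import secrets

# Simplified reading of the claimed identification-number handshake.
# All names and the 6-digit code format are illustrative assumptions.

class DeliveryHandshake:
    def __init__(self):
        # First user interface signal: a one-time code shown on the
        # user's device when the order is dispatched.
        self.code = f"{secrets.randbelow(10**6):06d}"

    def confirm(self, code_entered_by_agent: str) -> bool:
        # Second user interface signal: the agent enters the code the
        # user communicated; constant-time comparison avoids timing leaks.
        return secrets.compare_digest(self.code, code_entered_by_agent)

hs = DeliveryHandshake()
print(hs.confirm(hs.code))                        # True: delivery confirmed
wrong = "000000" if hs.code != "000000" else "111111"
print(hs.confirm(wrong))                          # False: mismatch blocks completion
```

The interesting design point, for eligibility purposes, is that the code travels out of band through the user, so a match is evidence the agent was physically at the handoff rather than merely self-reporting completion.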
Additionally, the limitations of obtaining order data with information about an order placed by a user; retrieving user data with information about the user; responsive to obtaining the order data, assigning a fulfillment of the order to a delivery agent; upon assigning the fulfillment of the order, instructing the delivery agent to complete a fulfillment process for the order by collecting items of the order in a retailer location; upon collecting the items in the retailer location, controlling a movement of the delivery agent from the retailer location to a delivery location associated with the user; obtaining, during the fulfillment process for the order, fulfillment data associated with a corresponding stage of a plurality of stages of the fulfillment process for the order; accessing, during the corresponding stage of the fulfillment process, a delivery prediction model of the online system, wherein the delivery prediction model is trained to predict a likelihood of a delivery for the order ending up as a failed delivery in which the online system receives confirmation of delivery from the delivery agent but also receives a message from the user that delivery did not occur; applying, during the corresponding stage of the fulfillment process, the delivery prediction model to the order data, the user data, and the fulfillment data to generate the likelihood of the failed delivery for the order predicted during the corresponding stage of the fulfillment process; comparing the likelihood of the failed delivery to a threshold value; responsive to the likelihood of the failed delivery being greater than the threshold value, identifying a corresponding friction action associated with the corresponding stage of the fulfillment process to prevent an occurrence of the failed delivery for the order; applying, during the corresponding stage of the fulfillment process, corresponding friction action at the online system that causes a device associated with the delivery agent 
to display a user interface with content intended to prevent the occurrence of the failed delivery for the order, wherein, at one instance of the corresponding stage of the fulfillment process that represents a delivery stage of the fulfillment process, the content comprises an instruction for the delivery agent to take, using the device associated with the delivery agent, a specific number and types of pictures of a delivery of the order for confirming a successful delivery of the order; sending, to the user, a message with information about an identification number, wherein the sending causes the display of a first user interface with a first message and the identification number, the first message prompting the user to communicate the identification number when the delivery agent delivers the order to the delivery location; receiving, from the user, the identification number; and responsive to receiving the identification number from the user, sending, to the delivery agent, a message, wherein the sending the message causes the display of a second message and the identification number, the second message prompting the delivery agent to communicate the identification number; receiving, from the delivery agent, a message including the identification number and an indication of a successful completion of the fulfillment process; responsive to receiving the message, generating a delivery result including the indication about the successful completion of the fulfillment process; and re-training the delivery prediction model by updating, using the delivery result, a set of parameters of the delivery prediction machine-learning model, as drafted, are processes that, under their broadest reasonable interpretations, cover mental processes. For example, these limitations recite activity comprising observations, evaluations, judgments, and opinions (see MPEP 2106.04(a)(2)(III)).
Additionally, the limitations of accessing, during each stage of the fulfillment process, a delivery prediction model of the online system, wherein the delivery prediction model is trained to predict a likelihood of a delivery for the order ending up as a failed delivery in which the online system receives confirmation of delivery from the delivery agent but also receives a message from the user that delivery did not occur; applying, during each stage of the fulfillment process, the delivery prediction model to the order data, the user data, and the fulfillment data to generate the likelihood of the failed delivery for the order predicted during each stage of the fulfillment process; comparing the likelihood of the failed delivery to a threshold value; and re-training the delivery prediction model by updating, using the delivery result, a set of parameters of the delivery prediction machine-learning model, as drafted, are processes that, under their broadest reasonable interpretations, cover mathematical concepts. For example, these limitations recite mathematical relationships and/or calculations (see MPEP 2106.04(a)(2)(I)). If a claim limitation, under its broadest reasonable interpretation, covers fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships, or managing interactions between people, it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind or with the aid of pen and paper but for recitation of generic computer components, it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, mathematical formulae or equations, or mathematical calculations, it falls within the “Mathematical Concepts” grouping of abstract ideas. 
Accordingly, the claims recite an abstract idea. The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a computer system comprising a processor; a non-transitory computer-readable storage medium having instructions executable by the processor; an online system; a database; a fully-autonomous robot; collection and navigation instructions stored at the computer-readable medium and executed by the processor; a delivery prediction machine-learning model; a user interface; a network; a device associated with the user; a first user interface; a first user interface element; a device associated with the delivery agent; a second user interface; a second user interface element; and various signals. A computer system comprising a processor; a non-transitory computer-readable storage medium having instructions executable by the processor; a database; a fully-autonomous robot; collection and navigation instructions stored at the computer-readable medium and executed by the processor; a delivery prediction machine-learning model; a user interface; a network; a device associated with the user; a first user interface; a first user interface element; a device associated with the delivery agent; a second user interface; a second user interface element; and various signals amount to no more than mere instructions to apply a judicial exception (see MPEP 2106.05(f)). An online system amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Accordingly, these additional elements do not integrate the abstract ideas into a practical application because they do not, individually or in combination, impose any meaningful limits on practicing the abstract ideas. The claims are therefore directed to an abstract idea. 
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the judicial exception into a practical application, the additional elements amount to no more than mere instructions to apply a judicial exception, and generally linking the use of a judicial exception to a particular technological environment or field of use for the same reasons as discussed above in relation to integration into a practical application. These cannot provide an inventive concept. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims, and thus the claims are not patent eligible. Claims 2-8, 10-13, and 15-19, describing various additional limitations to the method of Claim 1 or the product of Claim 14, amount to substantially the same unintegrated abstract idea as Claims 1 and 14 (upon which these claims depend, directly or indirectly) and are rejected for substantially the same reasons. 
Claims 2 and 15 disclose applying the delivery prediction machine-learning model comprises applying a corresponding delivery prediction machine-learning sub-model of a plurality of delivery prediction machine-learning sub-models (mere instructions to apply a judicial exception) of the delivery prediction machine-learning model to the order data, the user data and the fulfillment data obtained during the corresponding stage of the fulfillment process to generate the likelihood of the failed delivery for the order (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept); and the applying the corresponding friction action at the online system occurs during the corresponding stage of the fulfillment process to prevent the occurrence of the failed delivery for the order (further defining the abstract idea already set forth in Claim 1), which do not integrate the claims into a practical application. Claim 3 discloses receiving, from the device associated with the user and via the network, at least one of information about a number of items in the order, information about a time period between a placement of the order and a scheduled delivery for the order, information about a retailer associated with the online system, an initial monetary amount associated with the order, a maximum item unit price in the order, a maximum item price in the order, or a number of items in the order each having a monetary value between a first amount and a second amount (further defining the abstract idea already set forth in Claim 1), which does not integrate the claim into a practical application. 
Claim 4 discloses receiving, from the device associated with the user and via the network, at least one of information about a day of week when the order was placed, a time of day when the order was placed, or a time of day when the order is scheduled for delivery (further defining the abstract idea already set forth in Claim 1), which does not integrate the claim into a practical application. Claim 5 discloses retrieving, from the database, at least one of information about a type of the delivery location, a failed delivery rate for the delivery location, or information about one or more past orders having a delivery address at the delivery location (further defining the abstract idea already set forth in Claim 1), which does not integrate the claim into a practical application. Claim 6 discloses retrieving, from the database, at least one of information about a tenure of the user with the online system, information about one or more past orders placed by the user, or a rate of failed deliveries for the user (further defining the abstract idea already set forth in Claim 1), which does not integrate the claim into a practical application. Claim 7 discloses receiving, from the device associated with the delivery agent and via the network during the delivery stage of the fulfillment process, information about a time of the delivery of the order at the delivery location, information about a time of handoff of items in the order at the delivery location, and information on whether the delivery of the order is unattended (further defining the abstract idea already set forth in Claim 1), which does not integrate the claim into a practical application. 
Claim 8 discloses retrieving, from the database, information about the delivery agent assigned to the order (an abstract idea in the form of a certain method of organizing human activity and a mental process); and wherein applying the delivery prediction machine-learning model comprises applying the delivery prediction machine-learning model further to the information about the delivery agent to generate the likelihood of the failed delivery for the order (further defining the abstract idea already set forth in Claim 1), which do not integrate the claim into a practical application. Claims 10 and 18 disclose sending, via the network and to the device associated with the delivery agent, a third user interface signal (mere instructions to apply a judicial exception), wherein the sending the third user interface signal causes the device associated with the delivery agent to display a third user interface with one or more notification messages during the delivery stage of the fulfillment process prompting the delivery agent to accurately deliver the order to the delivery location (an abstract idea in the form of a certain method of organizing human activity and a mental process), which does not integrate the claim into a practical application. Claim 11 discloses assigning the delivery agent having a tenure with the online system longer than a threshold period to fulfill the order and deliver the order at the delivery location having a rate of failed deliveries higher than a threshold rate (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept), which does not integrate the claim into a practical application. 
Claim 12 discloses assigning the delivery agent having a failed delivery rate lower than a first threshold rate to fulfill the order and deliver the order at the delivery location having a rate of failed deliveries higher than a second threshold rate (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept), which does not integrate the claim into a practical application. Claims 13 and 19 disclose generating training data by gathering a random subset of historical data associated with successful deliveries of a first collection of orders and failed deliveries of a second collection of orders (an abstract idea in the form of a certain method of organizing human activity and a mental process); training the delivery machine-learning prediction model using the training data to generate the set of parameters of the delivery prediction machine-learning model (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept); collecting feedback data with information on whether the order was successfully delivered to the user upon applying the corresponding friction action (an abstract idea in the form of a certain method of organizing human activity and a mental process); and re-training the delivery prediction machine-learning model by updating, using the collected feedback data, the set of parameters of the delivery prediction machine-learning model (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept), which do not integrate the claims into a practical application. Claim 17 does not demonstrate integration into a practical application for the same reasons as for Claims 5-7. 
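Claims 13 and 19 recite a conventional train / deploy / collect-feedback / re-train cycle. A minimal sketch under stated assumptions: the one-feature logistic model, the "risk score" feature, and all names are illustrative, not taken from the application:

```python
import math
import random

# Sketch of the Claim 13/19 cycle: train on a random subset of historical
# deliveries, then re-train on feedback collected after a friction action.
# The single-feature logistic model is an illustrative assumption.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Logistic regression by stochastic gradient descent; samples = [(x, y)]."""
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

random.seed(0)
# Historical data: (risk score, 1 if the delivery failed else 0).
history = [(random.random(), 1 if random.random() < 0.2 else 0) for _ in range(500)]
w, b = train(random.sample(history, 200))   # Claim 13's "random subset"

# Feedback gathered after a friction action was applied (re-training step):
feedback = [(0.9, 0)]                       # a risky order that still succeeded
w, b = train(feedback, w, b, epochs=10)
print(f"failure likelihood for a 0.9-risk order: {sigmoid(0.9 * w + b):.2f}")
```

The claim language leaves open whether re-training is a full refit or an incremental update; the warm-started `train(feedback, w, b, ...)` call above sketches the incremental reading.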
Claim 21 discloses applying, during a stage of the plurality of stages of the fulfillment process, a corresponding delivery prediction machine-learning sub-model of a plurality of delivery prediction machine-learning sub-models (mere instructions to apply a judicial exception) of the delivery prediction machine-learning model to the order data, the user data, and a portion of the fulfillment data obtained during the stage of the fulfillment process to generate a corresponding value for the likelihood of the failed delivery for the order during the stage of the fulfillment process (an abstract idea in the form of a certain method of organizing human activity, a mental process, and a mathematical concept), which does not integrate the claim into a practical application.

Novel/Non-Obvious Subject Matter

Claims 1-8, 10-15, and 17-21 contain novel and non-obvious subject matter. The following is a statement of reasons for the indication of novel and non-obvious subject matter: while various references (such as those previously cited) teach the elements of these claims in isolation, none of the prior art of record taken individually or in combination teach or suggest the specific series of logical operations of Claims 1, 14, and 20 in the context of systems and methods recited for delivery transportation logistics, monitoring, and authentication. Further, combining these references with the references presently cited as disclosing the limitations of the independent claims to reach these respective specific sequences of logical operations would be unreasonable based on the number of and the manner of combining said references that would be required to do so.
In particular, the prior art of record, taken individually or in combination, fails to teach or suggest the following limitations within the context of the claims as a whole:

- obtaining order data with information about an order placed by a user of an online system;
- retrieving, from a database of the online system, user data with information about the user;
- responsive to obtaining the order data, assigning a fulfillment of the order to a delivery agent that is a fully-autonomous robot;
- upon assigning the fulfillment of the order, instructing, via collection instructions stored at the computer-readable medium and executed by the processor, the delivery agent operating as the fully-autonomous robot to complete a fulfillment process for the order by collecting items of the order in a retailer location;
- physically collecting, by the delivery agent operating as the fully-autonomous robot and using the collection instructions, the items in the retailer location;
- upon collecting the items in the retailer location, controlling, via navigation instructions stored at the computer-readable medium and executed by the processor, a movement of the delivery agent operating as the fully-autonomous robot from the retailer location to a delivery location associated with the user;
- moving, along a navigation route identified using the navigation instructions, the delivery agent operating as the fully-autonomous robot from the retailer location to the delivery location for delivering the items to the user at the delivery location;
- obtaining, during the fulfillment process for the order, fulfillment data associated with a corresponding stage of a plurality of stages of the fulfillment process for the order;
- accessing, during the corresponding stage of the fulfillment process, a delivery prediction machine-learning model of the online system, wherein the delivery prediction machine-learning model is trained to predict a likelihood of a delivery for the order ending up as a failed delivery in which the online system receives confirmation of delivery from the delivery agent but also receives a message from the user that delivery did not occur;
- applying, during the corresponding stage of the fulfillment process, the delivery prediction machine-learning model to the order data, the user data, and the fulfillment data to generate the likelihood of the failed delivery for the order predicted during the corresponding stage of the fulfillment process;
- comparing the likelihood of the failed delivery to a threshold value;
- responsive to the likelihood of the failed delivery being greater than the threshold value, identifying a corresponding friction action associated with the corresponding stage of the fulfillment process to prevent an occurrence of the failed delivery for the order;
- applying, during the corresponding stage of the fulfillment process, the corresponding friction action at the online system that causes a device associated with the delivery agent to display a user interface with content intended to prevent the occurrence of the failed delivery for the order, wherein, at one instance of the corresponding stage of the fulfillment process that represents a delivery stage of the fulfillment process, the content comprises an instruction for the delivery agent to take, using the device associated with the delivery agent, a specific number and types of pictures of a delivery of the order for confirming a successful delivery of the order;
- sending, via a network and to a device associated with the user, a first user interface signal with information about an identification number, wherein the sending the first user interface signal causes the device associated with the user to display a first user interface with a first message, the identification number, and a first user interface element, the first message prompting the user to enter the identification number using the first user interface element when the delivery agent delivers the order to the delivery location;
- receiving, via the network and from the device associated with the user, the identification number;
- responsive to receiving the identification number from the device associated with the user, sending, via the network and to the device associated with the delivery agent, a second user interface signal, wherein the sending the second user interface signal causes the device associated with the delivery agent to display a second user interface with a second message, the identification number, and a second user interface element, the second message prompting the delivery agent to enter the identification number using the second user interface element;
- receiving, via the network and from the device associated with the delivery agent, a signal including the identification number and an indication of a successful completion of the fulfillment process;
- responsive to receiving the signal, generating a delivery result signal including the indication about the successful completion of the fulfillment process; and
- re-training the delivery prediction machine-learning model by updating, using the delivery result signal, a set of parameters of the delivery prediction machine-learning model.

Claims 2-8, 10-13, 15, 17-20, and 21 contain novel and non-obvious subject matter due to their dependence upon Claims 1 and 14, respectively.
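The core loop the claims recite (predict a failure likelihood at each fulfillment stage, compare it to a threshold, and trigger a stage-specific friction action) can be sketched as follows. This is a hypothetical illustration only: the function names, feature names, stage labels, and toy scoring heuristic are assumptions for readability, not the applicant's actual model or code.

```python
# Hypothetical sketch of the claimed stage-wise failure-prediction loop.
# All identifiers below are illustrative assumptions.

# One illustrative friction action per fulfillment stage.
FRICTION_ACTIONS = {
    "collection": "verify_item_scan",       # re-scan items before leaving the store
    "navigation": "confirm_address",        # prompt confirmation of the drop-off address
    "delivery": "require_delivery_photos",  # take a set number/type of delivery photos
}

def predict_failure_likelihood(order_data, user_data, fulfillment_data):
    """Stand-in for the trained delivery-prediction model: returns a
    score in [0, 1]. A toy heuristic replaces the real trained model."""
    score = 0.0
    if user_data.get("prior_failed_deliveries", 0) > 0:
        score += 0.4
    if fulfillment_data.get("gps_accuracy_m", 0) > 50:
        score += 0.3
    if order_data.get("unattended_dropoff", False):
        score += 0.2
    return min(score, 1.0)

def friction_action_for_stage(stage, likelihood, threshold=0.5):
    """The claimed comparison step: if the predicted likelihood of a
    failed delivery exceeds the threshold, return the friction action
    associated with the current stage; otherwise return None."""
    if likelihood > threshold:
        return FRICTION_ACTIONS.get(stage)
    return None
```

In the claims, the predictor is a trained machine-learning model whose parameters are later updated using the delivery result signal; the heuristic above merely stands in for that model's scoring so the threshold-and-friction control flow is visible.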
Discussion of Prior Art Cited but Not Applied

For additional information on the state of the art regarding the claims of the present application, please see the following documents, not applied in this Office Action (all of which are prior art to the present application):

- PGPub 20230267401 – "Methods and Systems for Mitigating Transportation Errors," McKay, disclosing a system for detecting and mitigating errors in transportation operations
- PGPub 20150081343 – "System and Methods for Enabling Efficient Shipping and Delivery," Streebin, disclosing a system for improving shipment efficiency by determining the risk that a package will be stolen, damaged, or lost, and taking steps to reduce that risk

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK C CLARE, whose telephone number is (571) 272-8748. The examiner can normally be reached Monday-Friday, 6:30am-2:30pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Zimmerman, can be reached at (571) 272-4602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARK C CLARE/
Examiner, Art Unit 3628

/MICHAEL P HARRINGTON/
Primary Examiner, Art Unit 3628

Prosecution Timeline

Mar 22, 2024
Application Filed
Dec 12, 2024
Non-Final Rejection — §101, §103, §112
Feb 05, 2025
Examiner Interview Summary
Feb 05, 2025
Applicant Interview (Telephonic)
Mar 03, 2025
Response Filed
Apr 07, 2025
Final Rejection — §101, §103, §112
May 27, 2025
Applicant Interview (Telephonic)
May 27, 2025
Examiner Interview Summary
May 29, 2025
Request for Continued Examination
Jun 03, 2025
Response after Non-Final Action
Jun 18, 2025
Non-Final Rejection — §101, §103, §112
Sep 05, 2025
Applicant Interview (Telephonic)
Sep 05, 2025
Examiner Interview Summary
Sep 11, 2025
Response Filed
Oct 01, 2025
Final Rejection — §101, §103, §112
Dec 08, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Jan 12, 2026
Non-Final Rejection — §101, §103, §112
Apr 08, 2026
Examiner Interview Summary
Apr 08, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications in similar technology granted by this same examiner

Patent 12597084
MOBILITY SCOOTER SHARING SYSTEM AND MANAGING METHOD FOR MOBILITY SCOOTER SHARING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12567082
Autonomous Smart Contract Execution Platform
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12480772
ROUTING RECOMMENDATION SYSTEM BASED ON USER ACTIVITIES
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12437243
SYSTEM AND METHOD FOR PROVIDING LOCATION-BASED APPOINTMENT OPERATIONS
Granted Oct 07, 2025 (2y 5m to grant)
Patent 12367514
TAXI VEHICLE MANAGEMENT METHOD AND TAXI VEHICLE MANAGEMENT SYSTEM
Granted Jul 22, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
13%
Grant Probability
33%
With Interview (+19.4%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 152 resolved cases by this examiner. Grant probability derived from career allow rate.
