Prosecution Insights
Last updated: April 19, 2026
Application No. 18/244,445

Multi-Computer System for Dynamic Fraud Mapping Interface Generation

Status: Non-Final OA (§101)
Filed: Sep 11, 2023
Examiner: KIM, PATRICK
Art Unit: 3628
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)
Grant Probability: 26% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 26% (81 granted / 307 resolved; -25.6% vs Tech Center average)
Interview Lift: +33.3% (based on resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline); 38 applications currently pending
Total Applications: 345 (career history, across all art units)
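The headline figures in this panel reduce to simple ratios. A minimal sketch using the counts shown on the page; note that the without-interview rate (26.7%) is an assumed value back-derived from the reported +33.3-point lift, not a figure stated here:

```python
# Sketch of how the dashboard's headline figures reduce to simple ratios.
# Counts (81 granted / 307 resolved) come from the page; the without-interview
# rate is an assumption implied by the reported +33.3-point lift.
granted, resolved = 81, 307

career_allow_rate = granted / resolved          # 0.2638... -> shown as 26%
print(f"Career allow rate: {career_allow_rate:.1%}")

with_interview = 0.600     # grant rate among resolved cases with interview
without_interview = 0.267  # assumed; back-derived from the +33.3-point lift
interview_lift = with_interview - without_interview
print(f"Interview lift: {interview_lift:+.1%}")
```

The lift is reported in percentage points (60.0% minus roughly 26.7%), not as a relative percentage increase.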

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 36.2% (-3.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 307 resolved cases.
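The "vs TC avg" deltas are simply the examiner's per-statute overcome rate minus the Tech Center average for that statute. Notably, all four reported deltas are consistent with a single baseline of about 40%; the sketch below treats that baseline as an inferred assumption, not a figure stated on the page:

```python
# Per-statute delta = examiner rate minus Tech Center average.
# All four page deltas are consistent with a single ~40% baseline;
# that baseline is inferred here, not stated by the source.
examiner_rate = {"§101": 38.8, "§103": 36.2, "§102": 10.3, "§112": 12.8}
tc_average = 40.0  # inferred baseline estimate (percentage points)

for statute, rate in examiner_rate.items():
    delta = round(rate - tc_average, 1)
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```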

Office Action

§101
DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 30, 2026, has been entered. In the response filed March 30, 2026, the Applicant amended claims 1, 11, and 19. Claims 1, 3, 5-11, 13, 15-19, and 21 are pending in the current application.

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments for claims 1, 3, 5-11, 13, 15-19, and 21 with respect to the 35 U.S.C. 101 rejection have been considered but are unpersuasive. Applicant argues that the claims are not directed to a judicial exception. Examiner respectfully disagrees. Here, under broadest reasonable interpretation, the steps describe or set forth receiving fraud reporting data and identifying compromised locations with compromised payment terminals on a map, which amounts to concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). These limitations therefore fall within the “mental processes” subject matter grouping of abstract ideas. Applicant further argues that the claims are not directed to a judicial exception because they recite a practical application of the abstract idea by reciting limitations to train and execute a machine learning model. Examiner respectfully disagrees.
The requirement to execute the claimed steps/functions using “train, using historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously update, using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claims 1 and 19); and “training, by a computing platform, the computing platform having at least one processor and memory and historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously updating, by the at least one processor and using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of the identified compromised terminals” (claim 11), is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application. See MPEP § 2106.05(f). Here, the data used to train the machine learning model is a part of the fraud reporting data incorporated in the abstract idea.
The training and continuous updating of the machine learning model amount to mere instructions to implement the abstract idea on a generic computer. Applicant argues that the present claims are analogous to 2019 PEG Example 37, as the additional elements recite a specific improvement over prior art systems, and that the claims as a whole integrate the method of organizing human activity into a practical application. Examiner respectfully disagrees. Example 37 of the 2019 PEG discusses a case that automatically moved icons to positions within a GUI based on the determined amount of use. Here, the icons are placed on a map and displayed according to location, and they indicate the information that was output based on the data input into the system and machine learning model. The present claims do not disclose any rearranging of icons within a GUI, nor do they recite any limitations regarding rearranging icons within an interface; as such, Example 37 is not analogous to the present amended claims. Applicant’s arguments remain unpersuasive, as the ordered combination of claim elements (i.e., the claims as a whole) is not directed to an improvement to computer functionality/capabilities or to a computer-related technology or technological environment, and does not amount to a technology-based solution to a technology-based problem. Further, when considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append the abstract idea with words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus append the abstract idea with insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity). The 35 U.S.C. 101 rejection is hereby maintained.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3, 5-11, 13, 15-19, and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claims 1, 3, and 5-10 are drawn to a machine; claims 11, 13, and 15-18 are drawn to a process; and claims 19 and 21 are drawn to a product of manufacture, each of which is within the four statutory categories (e.g., a process, a machine). (Step 1: YES).

Step 2A – Prong One: In prong one of Step 2A, the claims are analyzed to evaluate whether they recite a judicial exception.
Claim 1 (representative of independent claims 11 and 19) recites/describes the following steps: “receive the fraud reporting data, wherein the fraud reporting data includes a plurality of incidents of potentially fraudulent activity reported by a plurality of users;” “responsive to receiving at least a threshold amount of fraud reporting data for a particular geographic location, analyze the fraud reporting data to identify one or more compromised locations including one or more compromised payment terminals, wherein analyzing the fraud reporting data includes… using, as inputs, the fraud reporting data, to output the one or more compromised payment terminals;” “…identifying each compromised location payment terminal of the one or more compromised payment terminals on a map of the particular geographical location…” These steps, under broadest reasonable interpretation, describe or set forth receiving fraud reporting data and identifying compromised locations with compromised payment terminals on a map, which amounts to concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). These limitations therefore fall within the “mental processes” subject matter grouping of abstract ideas. As such, the examiner concludes that claim 1 recites an abstract idea (Step 2A – Prong One: YES). Each of the dependent claims 3, 5-10, 13, 15-18, and 21 likewise recites/describes these steps (by incorporation) and therefore also recites limitations that fall within this subject matter grouping of abstract ideas; these claims are determined to recite an abstract idea under the same analysis. Any elements recited in a dependent claim that are not specifically identified/addressed by the examiner under Step 2A (Prong Two) or Step 2B of this analysis shall be understood to be an additional part of the abstract idea recited by that particular claim.
Step 2A – Prong Two: The claims recite the additional elements/limitations of: “a computing platform, comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and a memory storing computer-readable instructions,” (claim 1); “a computing platform, the computing platform having at least one processor and memory,” (claim 11); “one or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface,” (claim 19); “executing a machine learning model,” “an interactive fraud mapping interface,” “a user computing device,” “a display of the user computing device,” and “the trained machine learning model,” (claims 1, 11, and 19); “a fraud details interface,” (claims 5 and 15); “an external entity computing device,” (claim 9). The requirement to execute the claimed steps/functions using “a computing platform, comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and a memory storing computer-readable instructions,” (claim 1); “a computing platform, the computing platform having at least one processor and memory,” (claim 11); “one or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface,” (claim 19); “executing a machine learning model,” “an interactive fraud mapping interface,” “a user computing device,” “a display of the user computing device,” and “the trained machine learning model,” (claims 1, 11, and 19); “a fraud details interface,” (claims 5 and 15); “an external entity computing device,” (claim 9), is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application. See MPEP § 2106.05(f). The claims also recite the additional elements/limitations of: “train, historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously update, using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claims 1 and 19); and “training, by a computing platform, the computing platform having at least one processor and memory and using historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously updating, by the at least one processor and using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claim 11).
The requirement to execute the claimed steps/functions using “train, historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously update, using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claims 1 and 19); and “training, by a computing platform, the computing platform having at least one processor and memory and using historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously updating, by the at least one processor and using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claim 11), is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application. See MPEP § 2106.05(f).
The claims also recite the additional elements/limitations of: “generate, based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon … wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmit, to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device;” (claims 1 and 19), “generating, by the at least one processor and based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon …wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmitting, by the at least one processor and to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device,” (claim 11); “transmit the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the user computing device to display the fraud details interface on the display of the user computing device,” (claim 5); “transmit, to an external entity computing device associated with the compromised location, the notification, wherein transmitting the notification causes the external entity computing device to display the notification on a display of the 
external entity computing device,” (claim 9); and “transmitting, by the at least one processor, the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the user computing device to display the fraud details interface on the display of the user computing device,” (claim 15). The requirement to execute the claimed steps/functions using “generate, based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon … wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmit, to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device;” (claims 1 and 19), “generating, by the at least one processor and based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon …wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmitting, by the at least one processor and to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device,” (claim 11); “transmit the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the 
user computing device to display the fraud details interface on the display of the user computing device,” (claim 5); “transmit, to an external entity computing device associated with the compromised location, the notification, wherein transmitting the notification causes the external entity computing device to display the notification on a display of the external entity computing device,” (claim 9); and “transmitting, by the at least one processor, the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the user computing device to display the fraud details interface on the display of the user computing device,” (claim 15), even if considered to be an “additional” element for the purpose of the eligibility analysis, would simply append insignificant extra-solution activity to the judicial exception (e.g., mere post-solution activity in conjunction with an abstract idea). The term “extra-solution activity” is understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. The recited additional elements are deemed “extra-solution” because such data gathering and solution-outputting/transmission steps have long been held to be insignificant pre/post-solution activity. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application. See MPEP § 2106.05(h) and (g). Remaining dependent claims 3, 6-8, 10, 13, 16-18, and 21 either recite the same additional elements as noted above or fail to recite any additional elements (in which case, note the prong one analysis as set forth above – those claims are further part of the abstract idea as identified by the examiner for each respective dependent claim).
The examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea (Step 2A – Prong Two: NO).

Step 2B: As discussed above in “Step 2A – Prong 2,” the requirement to execute the claimed steps/functions using “a computing platform, comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and a memory storing computer-readable instructions,” (claim 1); “a computing platform, the computing platform having at least one processor and memory,” (claim 11); “one or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface,” (claim 19); “executing a machine learning model,” “an interactive fraud mapping interface,” “a user computing device,” “a display of the user computing device,” and “the trained machine learning model,” (claims 1, 11, and 19); “a fraud details interface,” (claims 5 and 15); “an external entity computing device,” (claim 9), is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations therefore do not qualify as “significantly more.” See MPEP § 2106.05(f).
As discussed above in “Step 2A – Prong 2,” the requirement to execute the claimed steps/functions using “train, historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously update, using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claims 1 and 19); and “training, by a computing platform, the computing platform having at least one processor and memory and using historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals;” and “continuously updating, by the at least one processor and using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals,” (claim 11), is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations therefore do not qualify as “significantly more.” See MPEP § 2106.05(f). 
As discussed above in “Step 2A – Prong 2”, the recited additional elements of “generate, based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon … wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmit, to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device;” (claims 1 and 19), “generating, by the at least one processor and based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon …wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal;” “transmitting, by the at least one processor and to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device,” (claim 11); “transmit the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the user computing device to display the fraud details interface on the display of the user computing device,” (claim 5); “transmit, to an external entity computing device associated with the compromised location, the notification, wherein transmitting the notification causes the external entity computing device to display the notification on a 
display of the external entity computing device,” (claim 9); and “transmitting, by the at least one processor, the fraud details interface to the user computing device wherein transmitting the fraud details interface to the user computing device causes the user computing device to display the fraud details interface on the display of the user computing device,” (claim 15), even if considered to be an “additional” element for the purpose of the eligibility analysis, would simply append insignificant extra-solution activity to the judicial exception (e.g., mere post-solution activity in conjunction with an abstract idea). These additional elements, taken individually or in combination, amount to well-understood, routine, and conventional activities previously known to those in the field of user interfaces and mapping interfaces, specified at a high level of generality and appended to the judicial exception. These limitations therefore do not qualify as “significantly more.” See MPEP § 2106.05(d). This conclusion is based on a factual determination. The determination that receiving data/messages over a network is well-understood, routine, and conventional is supported by Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014); and MPEP 2106.05(d)(II), which note the well-understood, routine, conventional nature of receiving data/messages over a network. Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea.
When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append the abstract idea with words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, generally link the abstract idea to a particular technological environment or field of use, append the abstract idea with insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity), and append well-understood, routine, and conventional activities previously known to the industry. Remaining dependent claims 3, 6-8, 10, 13, 16-18, and 21 either recite the same additional elements as noted above or fail to recite any additional elements (in which case, note the prong one analysis as set forth above – those claims are further part of the abstract idea as identified by the examiner for each respective dependent claim). The examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above (Step 2B: NO).

Allowable Subject Matter

Claims 1, 2, and 4-8 would be allowable subject matter if revised and amended to overcome the rejection under 35 U.S.C. 101 as set forth in this Office action. The closest prior art of record was indicated in the Office action mailed December 31, 2025.
As per claim 1 (representative of claims 11 and 19), the closest prior art of record taken either individually or in combination with other prior art of record fails to teach or suggest “train, using historical fraud reporting data including previously reported incidents of fraud, outcomes of investigations associated with the incidents, locations associated with incidents or geographic areas near incidents, a machine learning model to identify correlations between reported potentially fraudulent activity and one or more compromised or potentially compromised payment terminals; receive the fraud reporting data, wherein the fraud reporting data includes a plurality of incidents of potentially fraudulent activity reported by a plurality of users; responsive to receiving at least a threshold amount of fraud reporting data for a particular geographic location, analyze the fraud reporting data to identify one or more compromised locations including one or more compromised payment terminals, wherein analyzing the fraud reporting data includes executing the trained machine learning model using, as inputs, the fraud reporting data, to output the one or more compromised payment terminals; generate, based on the one or more compromised payment terminals, an interactive fraud mapping interface, wherein the interactive fraud mapping interface includes an interactive icon identifying each compromised payment terminal of the one or more compromised payment terminals on a map of the particular geographical location and wherein a displayed size of the interactive icon identifying each compromised payment terminal indicates a number of fraudulent incidents at a respective payment terminal; transmit, to a user computing device, the interactive fraud mapping interface, wherein transmitting the interactive fraud mapping interface to the user computing device causes the user computing device to display the interactive fraud mapping interface on a display of the user computing device; 
and continuously update, using a dynamic feedback loop and based on the fraud reporting data, the machine learning model to continuously improve accuracy of identified compromised terminals.” This combination of functions/features would not have been obvious to a person having ordinary skill in the art (PHOSITA) in view of the prior art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Patrick Kim, whose telephone number is (571) 272-8619. The examiner can normally be reached Monday - Friday, 9AM - 5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached at (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Patrick Kim/
Examiner, Art Unit 3628
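The examiner's §101 analysis walks the standard Alice/Mayo flow from the 2019 PEG: Step 1 (statutory category), Step 2A Prong One (does the claim recite a judicial exception?), Step 2A Prong Two (is the exception integrated into a practical application?), and Step 2B (significantly more?). A schematic sketch of that decision flow, with boolean inputs encoding the examiner's findings in this case (illustrative only, not a general eligibility test):

```python
# Schematic of the Alice/Mayo eligibility flow the Office action walks through
# (2019 PEG): Step 1, Step 2A Prongs One/Two, then Step 2B.
def section_101_eligible(statutory_category: bool,
                         recites_judicial_exception: bool,
                         integrates_practical_application: bool,
                         adds_significantly_more: bool) -> bool:
    if not statutory_category:                  # Step 1
        return False
    if not recites_judicial_exception:          # Step 2A, Prong One
        return True
    if integrates_practical_application:        # Step 2A, Prong Two
        return True
    return adds_significantly_more              # Step 2B

# Examiner's findings for claims 1, 3, 5-11, 13, 15-19, and 21:
print(section_101_eligible(
    statutory_category=True,                 # machine / process / manufacture
    recites_judicial_exception=True,         # "mental processes" grouping
    integrates_practical_application=False,  # MPEP 2106.05(f), (g), (h)
    adds_significantly_more=False,           # MPEP 2106.05(d)
))  # -> False: rejected under 35 U.S.C. 101
```

Under this flow, overcoming the rejection requires flipping either the Prong Two or Step 2B finding, which is why the argument centers on practical application and the machine-learning training limitations.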

Prosecution Timeline

Sep 11, 2023: Application Filed
Jun 14, 2025: Non-Final Rejection — §101
Sep 16, 2025: Response Filed
Dec 27, 2025: Final Rejection — §101
Feb 25, 2026: Response after Final Action
Mar 30, 2026: Request for Continued Examination
Mar 31, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572954: METHODS AND APPARATUS FOR DETERMINING ITEM DEMAND AND PRICING USING MACHINE LEARNING PROCESSES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12505465: METHOD AND ARTICLE OF MANUFACTURE FOR A FAIR MARKETPLACE FOR TIME-SENSITIVE AND LOCATION-BASED DATA (granted Dec 23, 2025; 2y 5m to grant)
Patent 12499390: MULTIMODAL MOBILITY FACILITATING SEAMLESS RIDERSHIP WITH SINGLE TICKET (granted Dec 16, 2025; 2y 5m to grant)
Patent 12481932: SYSTEMS AND METHODS FOR IMMEDIATE MATCHING OF REQUESTOR DEVICES TO PROVIDER DEVICES (granted Nov 25, 2025; 2y 5m to grant)
Patent 12462215: UNIFIED VIEW OPERATOR INTERFACE SYSTEM AND METHOD (granted Nov 04, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 26%
With Interview: 60% (+33.3%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
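The "High" PTA flag is plausible given the projected pendency, assuming it tracks pendency beyond the statutory three-year guarantee: under 35 U.S.C. 154(b)(1)(B), "B-delay" patent term adjustment accrues when an application pends more than three years from filing, though time consumed by an RCE (one was filed here on Mar 30, 2026) is excluded from that accrual. A simplified sketch of the month arithmetic only, not a PTA calculation:

```python
# Rough sketch of why a 4y 2m projected pendency flags PTA exposure.
# Under 35 U.S.C. 154(b)(1)(B), "B-delay" accrues for pendency beyond 3 years
# from filing, but time consumed by an RCE is excluded -- and this
# application's RCE (Mar 30, 2026) cuts off further B-delay accrual.
projected_pendency_months = 4 * 12 + 2   # 4y 2m median time to grant
b_delay_guarantee_months = 3 * 12        # 3-year pendency guarantee

months_over = projected_pendency_months - b_delay_guarantee_months
print(f"Projected pendency exceeds the 3-year guarantee by {months_over} months")
```

With these inputs the excess is 14 months before any RCE or applicant-delay deductions, which in practice would substantially reduce the accrued B-delay.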
