Prosecution Insights
Last updated: April 19, 2026
Application No. 18/490,023

ARTIFICIAL-INTELLIGENCE-ENABLED INFORMATION SECURITY TECHNIQUES BASED ON USER METADATA

Non-Final OA (§101, §103)
Filed: Oct 19, 2023
Examiner: KANG, IRENE S
Art Unit: 3696
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mastercard Technologies Canada ULC
OA Round: 3 (Non-Final)

Grant Probability: 16% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 6y 1m
With Interview: 42%

Examiner Intelligence

Career Allow Rate: 16% (37 granted / 224 resolved; -35.5% vs TC avg) — grants only 16% of cases
Interview Lift: +26.0% among resolved cases with interview
Typical Timeline: 6y 1m avg prosecution; 16 applications currently pending
Career History: 240 total applications across all art units

Statute-Specific Performance

§101: 35.5% (-4.5% vs TC avg)
§103: 33.4% (-6.6% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 224 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendments and Arguments

As to the rejection of Claims 1-6 and 11-16 under 35 U.S.C. § 101, Applicant's arguments and amendments have been fully considered but are not persuasive. Applicant argues that the instant claims do not merely recite a judicial exception but instead integrate the judicial exception into a practical application. The claims are directed to using a first set of metadata and historical data and using a model to find a match; they therefore fall under the category of "organizing human activity." The claims do not integrate the abstract idea into a practical application, but rather receive and process data and then output the results, which merely adds the words "apply it" through use of a generic computer. This does not go beyond "apply it," and simply relying on a computer to perform routine tasks or calculations more quickly or more accurately is insufficient to render a claim patent eligible. See Alice, 134 S. Ct. at 2359 ("use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions" is not an inventive concept); Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Can. (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012) (a computer "employed only for its most basic function . . . does not impose meaningful limits on the scope of those claims"); cf. DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59 (Fed. Cir. 2014) (finding a computer-implemented method patent eligible where the claims recite a specific manipulation of a general-purpose computer such that the claims do not rely on a "computer network operating in its normal, expected manner").
Examiner argues that the claims do not amount to significantly more because the limitations, in effect, merely add the words "apply it" to the judicial exception, are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). The additional elements do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. Examiner maintains that in the current claims, the computer is used as a tool to implement the abstract idea. The rejection is thereby maintained.

As to the rejection of claims 1-6 and 11-16 under 35 U.S.C. § 103, Applicant's arguments and amendments have been fully considered but are moot given the new grounds of rejection of the claims as amended.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 and 11-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: Claims 1-6 recite a system including memory hardware and one or more electronic processors; claims 11-16 recite a method. Both are statutory categories of invention. (Step 1: Yes)

Step 2A, Prong 1: Under this step of the analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent-ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability.
Here, independent system claim 1 recites the abstract idea of: generate historical behavioral biometric metadata…training a…model using the historical profile…generate first behavioral biometric metadata…receiving a transaction request…, the transactional request including first behavioral biometric metadata; providing the first behavioral biometric metadata and the historical behavioral biometric metadata to the trained…model to generate a biometric match; generating a control signal based on the biometric match. The limitations above recite an abstract idea that falls within one or more of the three enumerated categories of patent-ineligible subject matter, to wit: certain methods of organizing human activity, which includes the subcategory of fundamental economic practices or principles (i.e., risk mitigation that shows receiving a transaction request and taking steps to authenticate the user based on an analysis of metadata associated with the user utilizing a model that yields outputs that are then compared). Thus the claim recites an abstract idea. (Step 2A, Prong 1: Yes)

Step 2A, Prong 2: Under this step, recited additional elements are evaluated to determine whether they provide an integration of the recited abstract idea into a practical application. (MPEP §2106.04) Here, the additional elements – a plurality of computing platforms; memory hardware storing instructions; one or more electronic processors configured to execute the instructions; computing platforms (including a first and a second) – are recited in the claims at a high degree of generality, and thus do not amount to a practical application, since the claims simply use each of these additional elements as a tool to carry out the recited abstract idea (i.e., "apply it"). (See, e.g., MPEP §2106.05(f)).
Moreover, these additional elements simply perform generic computer data receipt, transmission, and processing or analysis steps, such as those functions typically performed by a general-purpose computer or a computing system. The recitation of a machine learning model is itself identified as an additional element beyond the abstract idea, as machine learning merely recites programming of a computer to perform a desired data analysis to implement the abstract idea. (MPEP 2106.05(f)) The claim recites additional limitations – extract historical features…extract features indicative of current user interactions…receiving historical behavioral biometric metadata…to build a historical profile and sending the control signal… (Note: as a definition is not provided in Applicant's specification, a control signal is interpreted as the data sent from one device to another) – which merely recite receiving and sending data, which are insignificant extra-solution activity and therefore not indicative of a practical application. Other features further define the data utilized in the model to implement the abstract idea. (Step 2A, Prong 2: No)

Step 2B: Under this step, it is determined whether the recited additional elements amount to something "significantly more" than the recited abstract idea to which the claims are directed (i.e., provide an inventive concept). (MPEP §2106.05) The recited additional elements, identified above in the Step 2A, Prong 2 analysis, do not amount to an inventive concept since, as stated above, the additional elements are specified at a high level of generality and are merely being used as tools to carry out the abstract idea (i.e., "apply it"). When considered separately or in combination, the additional elements are not sufficient to amount to significantly more than the judicial exception.
As stated in MPEP 2106.05(d), a factual determination is required to support a conclusion that an additional limitation (or combination of additional limitations) is well-understood, routine, conventional activity (Berkheimer). In view of this requirement, the limitations – receiving historical behavioral biometric metadata…to build a historical profile and sending the control signal… – do not amount to significantly more than the abstract idea, because the courts have found the concepts of receiving and sending data to be well-understood, routine, and conventional activity (see MPEP 2106.05(d): OIP Techs., Inc.; buySAFE, Inc.). (Step 2B: No)

Independent claim 11 recites limitations similar to those of independent claim 1; therefore, the limitations of claim 11 are directed to the same abstract idea identified in claim 1. A similar analysis as applied to claim 1 is applicable and, accordingly, claim 11 is also rejected under 35 USC 101.

Dependent claims 2-6 and 12-16 are rejected under 35 USC 101. The dependent claims further refine the abstract idea present in independent claims 1 and 11, from which they respectively depend, as follows: Claims 2-5 and 12-15 further describe data analysis steps taken using the model to implement the abstract idea. Claims 6 and 16 recite an additional limitation – retrieving the historical profile based on an identifier contained in the transactional request – which merely recites gathering data, which is insignificant extra-solution activity and therefore not indicative of a practical application or of something significantly more. Claims 2-5 and 12-15 recite additional elements – machine learning – for which the analysis presented for claim 1 is also applicable. These dependent claims do not add any element or feature that provides an integration into a practical application or that is significantly more than the recited abstract idea. (See MPEP §§2106.04, 2106.05)
Thus, neither the independent claims nor the dependent claims, considering all the limitations of each claim both individually and in combination, add any additional element or recite any subject matter that provides an integration into a practical application or something significantly more than the recited abstract idea, such that the claims would be directed to patent-eligible subject matter. For the reasons set forth above, claims 1-6 and 11-16 are not patent-eligible under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over Margolin et al. (U.S. 2021/0336952) in view of Aument et al. (U.S. 2023/0206233), in view of Perez et al. (U.S. 2015/0363785), and further in view of the publication by Mack Craft (U.S. 2024/0281474). Note: method claims 11-15 are presented first in order of analysis.
Re claim 11: Margolin shows a computer-implemented method comprising: receiving historical behavioral biometric metadata from at least one of a plurality of computing platforms to build a historical profile, the at least one of the plurality of computing platforms including a dimensionality reduction model comprising a principal component analysis transform and being configured to: extract historical features indicative of historical user interactions, generate historical behavioral biometric metadata comprising a compact historical signature by providing the extracted historical features to the dimensionality reduction model to project the extracted historical features into a lower-dimensional principal component space, extract current features indicative of current user interactions, and generate first behavioral biometric metadata comprising a compact current signature by providing the extracted current features to the dimensionality reduction model to project the extracted current features into the lower-dimensional principal component space; (paras 39, 42, and 43 depict a training dataset generation module that is operable to generate a training dataset based on user interaction data 218 (fig 2, where this user interaction being stored is indicative of the information having been received) associated with various user accounts during prior user sessions and para 40 showing user interaction data 218 may include, for a user account, interaction data corresponding to sessions performed by multiple different devices (interpreted as computing platforms), which may be of different device types. 
Para 19 indicates interaction data may include cursor location, which may be provided as a function of time; user interface control commands, which may be provided as a function of time (e.g., zooming in or out on various portions of a webpage); and user interface data commands, which may be provided as a function of time (e.g., cursor selections, characters entered via a keyboard, etc.) – where each interaction data is interpreted as behavioral biometric metadata and the set of gathered information is interpreted as the historical profile); training a machine learning model using the historical profile (para 39 – shows a training data set is generated based on user interaction data 218 associated with various user accounts 214 during prior (i.e., historical) user sessions); receiving a transaction request from a first computing platform (para 17 – the computer system (e.g., a server system) may receive a request to authorize a transaction associated with a first user account, where the request includes transaction details for the transaction and, separate from those transaction details, interaction data that is indicative of the manner in which the requesting user interacted with their client device during a user session (e.g., the session during which the requesting user initiated the transaction)). Margolin does not expressly show, but Aument shows: providing the first behavioral biometric metadata (to the trained machine learning model) (para 44 showing input touch data to trained machine learning model) and the historical behavioral biometric metadata to the trained machine learning model to generate a biometric match (para 44 – For instance, the authentication component 122 may derive one or more metrics from the touch data and may input these metrics as feature data into the behavioral model trained for the buyer 102.
The model may output a score indicating a level of correspondence between the touch data 138 (interpreted as currently being provided) and the touch data on which the model has been trained (interpreted as historical data)); generating a control signal based on the biometric match (para 44 – The authentication component 122 may then compare this score to a threshold and, if the score is less than the threshold, may output an indication (interpreted as a control signal, which is data being sent) that the authentication has been denied, such as the buyer interface 140. In addition, the authentication component 122 may send, to a merchant device or other device associated with a currently requested transaction, an indication of a potential fraud attempt. If, however, the score is greater than the threshold, then the authentication component 122 may authenticate the buyer 102 and output another buyer interface 142 indicating that the buyer 102 has been authenticated.); and sending the control signal to a second computing platform (para 44 – In addition, the authentication component 122 may send, to a merchant device (interpreted as the second computing platform) or other device associated with a currently requested transaction, an indication of a potential fraud attempt).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the process of Margolin, which shows generating a model to evaluate user behavior information, with Aument, which shows an authentication process in which a user requesting a transaction is evaluated using machine learning analysis of behavioral parameters. One of ordinary skill would have been motivated to make the modification to ensure a current user of a device is properly authenticated as being a valid user.
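The claim-11 flow mapped above — project extracted interaction features into a lower-dimensional principal-component space to form compact signatures, then score a current signature against a historical one and threshold the result — can be sketched in Python. This is an illustrative reconstruction, not the claimed or cited implementation: the component count, the use of cosine similarity as the closeness measure, and the 0.8 threshold are all assumptions.

```python
import numpy as np

def fit_pca(features: np.ndarray, n_components: int = 2):
    """Fit a PCA transform: center the data, then take the top principal axes via SVD."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(features: np.ndarray, mean: np.ndarray, components: np.ndarray) -> np.ndarray:
    """Project feature vectors into the lower-dimensional principal-component space."""
    return (features - mean) @ components.T

def closeness(current_sig: np.ndarray, historical_sig: np.ndarray) -> float:
    """Cosine similarity between a current and a historical compact signature."""
    return float(current_sig @ historical_sig /
                 (np.linalg.norm(current_sig) * np.linalg.norm(historical_sig)))

def biometric_match(current_sig: np.ndarray, historical_sig: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Positive match only when the closeness meets or exceeds the threshold."""
    return closeness(current_sig, historical_sig) >= threshold
```

In this sketch, historical sessions would be projected once to build the profile signature, and each incoming transaction's features are projected with the same fitted transform before comparison.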
Margolin in view of Aument shows the method of claim 11 but does not expressly show a dimensionality reduction model comprising a principal component analysis transform and being configured to: extract historical features indicative of historical user interactions, generate historical behavioral biometric metadata comprising a compact historical signature by providing the extracted historical features to the dimensionality reduction model to project the extracted historical features into a lower-dimensional principal component space, extract current features indicative of current user interactions, and generate first behavioral biometric metadata comprising a compact current signature by providing the extracted current features to the dimensionality reduction model to project the extracted current features into the lower-dimensional principal component space; or the historical features including (i) historical keystroke data, (ii) at least one historical non-keystroke modality selected from historical cursor trajectory data, historical scrolling behavior data, historical touch gesture data, and historical device-orientation change data, or (iii) a combination thereof; or the current features including (i) current keystroke data, (ii) at least one current non-keystroke modality selected from current cursor trajectory data, current scrolling behavior data, current touch gesture data, and current device-orientation change data, or (iii) a combination thereof.
Craft shows a dimensionality reduction model comprising a principal component analysis transform and being configured to: extract historical features indicative of historical user interactions, generate historical behavioral biometric metadata comprising a compact historical signature by providing the extracted historical features to the dimensionality reduction model to project the extracted historical features into a lower-dimensional principal component space, extract current features indicative of current user interactions, and generate first behavioral biometric metadata comprising a compact current signature by providing the extracted current features to the dimensionality reduction model to project the extracted current features into the lower-dimensional principal component space (paras 165-167 Extracted features are combined into a compact representation that serves as the fingerprint or signature of the photograph. This step aims to create a unique identifier that captures the essential characteristics of the image while being invariant to certain transformations. The fingerprint may be a hash value computed from the extracted features, a compact descriptor derived from feature vectors, or a combination of both. [0166] If a photograph or video frame includes a face, facial recognition may be applied to determine a person's identity from the digital image or video frames. The presence and location of faces within an image of a photograph or frame are detected by scanning the image and identify regions likely to contain faces using Haar cascades, convolutional neural networks (CNNs), and/or histogram of oriented gradients (HOG). Once faces are detected, they are aligned to a standardized pose or orientation to help reduce variations due to factors like head tilt, rotation, and scale. 
Alignment involves transforming the detected face regions to a canonical form, such as frontal view or upright position using landmark detection, affine transformations, and/or geometric normalization. After alignment, features are extracted from the face regions to create a compact representation that captures distinctive characteristics. Features may include various aspects of facial geometry, texture, and appearance such as Local Binary Patterns (LBP) which describe texture patterns in localized regions of the face, Histogram of Oriented Gradients (HOG) which represents local gradient orientations in the face image, Eigenfaces which uses principal component analysis (PCA) to extract eigenvalues representing facial features and/or Deep Learning which applies convolutional neural networks (CNNs) to learn hierarchical features from raw pixel data. Once features are extracted, they are encoded into a compact representation suitable for comparison. This step may apply normalization and dimensionality reduction techniques to enhance robustness and efficiency using vector quantization, hashing, and/or feature aggregation.) Perez shows wherein the historical and current features includes keystroke data (paras 60 Behavioral biometrics may be grouped into the type of behavior being captured and analyzed. Behavioral profile 616 includes behavioral biometric samples of keystroke dynamics which include, for example and without limitation, the total elapsed time taken for suspect consumer 622 to enter a personal identification number (PIN) code into, e.g., a point-of-sale device during a payment card transaction, or the time spacing between keystrokes on a keyboard or a tablet input device during, e.g., entering data into a data field during an online transaction, or the speed of using an interface. 
Keystroke dynamics may be collected from, for example, a point-of-sale device, a desktop or laptop computer keyboard, or a mobile computing device's physical or virtual keyboard.).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the process of Margolin, which shows generating a model to evaluate user behavior information, and Aument, which shows an authentication process utilizing machine learning analysis of behavioral parameters, with Perez, which identifies behavioral biometrics associated with users, and with the biometric metadata and model of Craft. One of ordinary skill would have been motivated to make the modification to expand upon the various forms of user behavior data that can be used to authenticate said user.

Re claim 12: Margolin in view of Aument shows the method of claim 11. Regarding the limitations updating the historical profile using the first behavioral biometric metadata and retraining the trained machine learning model using the updated historical profile, Aument further shows using the newly received touch data to update the behavioral model (para 20 – where newly received touch data is interpreted as the first behavioral biometric metadata, which is added to the already existing data of a historical profile, and update is interpreted as retraining the model). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have further modified the process of Margolin and Aument with Aument's teaching of updating a behavioral model. One of ordinary skill would have been motivated to make the modification to keep the model up to date and accurate.
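Perez's keystroke-dynamics signals — total elapsed entry time and the spacing between keystrokes — reduce to simple arithmetic over key-press timestamps. The following is a minimal sketch of that kind of feature extraction; the specific feature names and the choice of statistics are illustrative assumptions, not taken from Perez.

```python
def keystroke_features(press_times: list[float]) -> dict[str, float]:
    """Derive simple keystroke-dynamics features from key-press timestamps (seconds).

    Mirrors the kinds of signals Perez describes (total elapsed entry time,
    inter-keystroke spacing); the feature names here are illustrative.
    """
    if len(press_times) < 2:
        raise ValueError("need at least two keystrokes")
    # Gaps between consecutive key presses
    gaps = [b - a for a, b in zip(press_times, press_times[1:])]
    return {
        "total_elapsed": press_times[-1] - press_times[0],
        "mean_gap": sum(gaps) / len(gaps),
        "max_gap": max(gaps),
    }
```

A feature vector like this could then feed the behavioral model alongside the non-keystroke modalities recited in the claims.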
Re claim 13: Margolin further shows wherein providing the first behavioral biometric metadata and the historical behavioral biometric metadata to the trained machine learning model to generate the biometric match includes: providing the first behavioral biometric metadata to the trained machine learning model to generate a first output (para 17 – a computer system (e.g., a server system) may receive a request to authorize a transaction associated with a first user account, where the request includes transaction details for the transaction and, separate from those transaction details, interaction data that is indicative of the manner in which the requesting user interacted with their client device during a user session (e.g., the session during which the requesting user initiated the transaction); and para 21 – the server system may apply a machine learning model to the interaction data to generate a first encoding value (interpreted as a first output) that is based on the manner in which the requesting user interacts with their client device during the first user session); providing the historical profile to the trained machine learning model to generate a reference output (paras 21 and 37 – The server system may then compare this first encoding value to a reference encoding value (interpreted as the reference output) that was generated based on prior interaction data indicative of the manner in which an authorized user of the first account interacted with the first client device during prior user sessions.) (Note: as the model was trained on the historical profile, this limitation is interpreted as met by the initially trained model); computing a closeness of the first output and the reference output (para 21 – Based on this comparison, the server system may generate an output value (also referred to as a "similarity score") that indicates a similarity between the first encoding value and the reference encoding value).

Re claim 14: Margolin in view of Aument shows the method of claim 13.
Margolin further shows wherein providing the first behavioral biometric metadata and the historical behavioral biometric metadata to the trained machine learning model to generate the biometric match includes: in response to the closeness not exceeding a threshold, generating a negative biometric match (para 21 – If, however, the similarity score does not exceed the predetermined threshold value, this may indicate that the requesting user of the first client device is someone other than the authorized user; as such, the server system may then take one or more corrective actions, such as denying the transaction or initiating additional authentication operations; para 37 – if the similarity score does not exceed a predetermined threshold value, the authentication determination module may determine that the user's interaction with the client device during the current user session is inconsistent with the interaction behavior of the authorized user of the user account during prior user sessions and, as such, the user may not be authenticated; and para 38 – the authentication determination module may generate an authentication determination specifying the outcome of this determination).

Re claim 15: Margolin in view of Aument shows the method of claim 14.
Margolin further shows wherein providing the first behavioral biometric metadata and the historical behavioral biometric metadata to the trained machine learning model to generate the biometric match includes: in response to the closeness meeting or exceeding the threshold, generating a positive biometric match (paras 21 and 37 – if the similarity score exceeds some predetermined threshold value (e.g., 0.40, 0.75, 0.80, 0.95, or some other value), the authentication determination module may determine that the user's interaction with the client device during the current user session is sufficiently similar to the interaction behavior of the user of the account during prior user sessions such that the requesting user may be authenticated; and para 38 – the authentication determination module may generate an authentication determination specifying the outcome of this determination).

Re claim 16: Margolin in view of Aument shows the method of claim 11. Margolin further shows retrieving the historical profile based on an identifier contained in the transaction request (paras 31, 32 and fig 2 show a user trying to access an account, with that user account associated with user interaction data from prior user sessions (interpreted as the historical profile)).

Re claims 1-6: the limitations closely parallel those of claims 11-16 and are therefore rejected under a similar rationale. Margolin shows memory hardware configured to store instructions and one or more electronic processors configured to execute the instructions (paras 52, 69-71).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE S KANG, whose telephone number is (571) 270-3611. The examiner can normally be reached Monday through Friday, 10am-2pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Gart, may be reached at (571) 272-3955.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE S KANG/
Examiner, Art Unit 3696
1/24/2026

/MATTHEW S GART/
Supervisory Patent Examiner, Art Unit 3696

Prosecution Timeline

Oct 19, 2023
Application Filed
Feb 09, 2025
Non-Final Rejection — §101, §103
Apr 04, 2025
Interview Requested
Apr 11, 2025
Applicant Interview (Telephonic)
Apr 11, 2025
Examiner Interview Summary
May 15, 2025
Response Filed
May 28, 2025
Final Rejection — §101, §103
Aug 11, 2025
Interview Requested
Aug 20, 2025
Applicant Interview (Telephonic)
Aug 20, 2025
Examiner Interview Summary
Sep 15, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §101, §103
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586131
METHOD AND SYSTEM FOR COVARIANCE MATRIX ESTIMATION
2y 5m to grant · Granted Mar 24, 2026

Patent 12555162
EVENT TRIGGERED TRADING
2y 5m to grant · Granted Feb 17, 2026

Patent 12469018
SPLIT ATM BOOTH AND METHOD OF PERFORMING BANKING TRANSACTIONS THEREIN
2y 5m to grant · Granted Nov 11, 2025

Patent 12423701
TRANSACTION PROCESSING SYSTEM AND TRANSACTION PROCESSING METHOD
2y 5m to grant · Granted Sep 23, 2025

Patent 12288203
Systems and Methods for an Electronic Wallet Payment Tool
2y 5m to grant · Granted Apr 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 16%
With Interview: 42% (+26.0%)
Median Time to Grant: 6y 1m
PTA Risk: High
Based on 224 resolved cases by this examiner. Grant probability derived from career allow rate.
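The headline figures are consistent with a simple derivation: truncate the career allow rate (37 granted of 224 resolved, about 16.5%) to whole percentage points, then add the interview lift. The rounding rule is an assumption — the page does not state how it derives the displayed numbers — so the sketch below is a hypothetical reconstruction.

```python
def grant_projection(granted: int, resolved: int, interview_lift_pts: float):
    """Hypothetical reconstruction of the displayed projection: the career allow
    rate truncated to whole percentage points, plus the interview lift."""
    base = int(100 * granted / resolved)          # 37/224 truncates to 16
    return base, int(base + interview_lift_pts)   # 16 + 26.0 -> 42
```

With this examiner's numbers, `grant_projection(37, 224, 26.0)` reproduces the 16% baseline and 42% with-interview figures shown above.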
