Prosecution Insights
Last updated: April 19, 2026
Application No. 18/905,202

SYSTEMS AND METHODS FOR DYNAMICALLY GENERATING A FRICTION-BASED SECURITY DEVICE

Status: Non-Final OA (§102)
Filed: Oct 03, 2024
Examiner: RAHIM, MONJUR
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (742 granted / 879 resolved; +26.4% vs TC avg)
Interview Lift: +16.1%, strong (allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 1m typical timeline (37 applications currently pending)
Career History: 916 total applications across all art units
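The headline figures in this panel are simple ratios. A minimal Python sketch reproducing them (the variable and function names are mine; only the counts 742/879 and the +16.1% lift come from the panel, and the with/without split passed to `interview_lift` below is illustrative, not reported by the tool):

```python
# Career allow rate: granted / resolved, using the counts shown above.
granted, resolved = 742, 879
allow_rate = granted / resolved              # ~0.844, displayed as 84%
print(f"Career allow rate: {allow_rate:.1%}")

# Interview lift: difference between the allowance rate of resolved cases
# that had an examiner interview and those that did not. The panel reports
# only the +16.1% difference, not the underlying split.
def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift attributed to conducting an interview."""
    return rate_with - rate_without
```

Note that 742/879 is about 84.4%, which the dashboard rounds to the displayed 84%.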

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 26.6% (-13.4% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 879 resolved cases
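Each statute row pairs the examiner's rate with its delta from the Tech Center average, so the implied baseline can be backed out as rate minus delta. A quick sketch with the values copied from the rows above (the dict layout is mine):

```python
# (examiner rate %, delta vs Tech Center average %) per statute,
# copied from the rows above.
rows = {
    "101": (11.7, -28.3),
    "103": (41.7, +1.7),
    "102": (26.6, -13.4),
    "112": (5.5, -34.5),
}

# The implied Tech Center baseline for each row is rate - delta.
baselines = {s: rate - delta for s, (rate, delta) in rows.items()}
for statute, avg in baselines.items():
    print(f"§{statute}: TC average ≈ {avg:.1f}%")
```

All four rows back out the same ~40.0% baseline, consistent with a single Tech Center average estimate being used for every statute in this chart.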

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This action is responsive to: an original application filed on 10 March 2024.

2. Claims 1-20 are currently pending; claims 1, 11, and 20 are independent claims.

Information Disclosure Statement

3. No IDS filed.

Priority

4. The claimed priority date has been noted.

Drawings

5. The drawings filed on 10 March 2024 are accepted by the examiner.

Claim Rejections - 35 USC § 102

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Riffert et al. (US Publication No. 2021/0142335), hereinafter Riffert.

Regarding claim 1: receiving, via an application server, a first dataset (Riffert, ¶65, ¶32), wherein Riffert discloses intelligently analyzing user behavior and identifying users that may intend to commit fraudulent behavior; trained machine learning models can look for different fraud attributes within previous data associated with the user to generate scores.
The scores are not necessarily a single user score, but a collection of multiple scores corresponding to how probable it is that a user will commit multiple different types of inappropriate behavior; based on the different types of behavior that are identified as being associated with a user, the friction service may turn friction points on or off. Determining, via a trained machine learning model, a first friction level, wherein the trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract), wherein friction points to turn on/off may be determined dynamically based on a particular user and their scores; in this case, friction point(s) may be activated depending on which negative behavior attribute is identified by the machine learning algorithms (see also ¶34). Generating, via the application server, a first security device based on the first friction level (Riffert, ¶32, Fig. 2A); and causing to output, via a graphical user interface ("GUI"), the first security device (Riffert, ¶54, ¶35-36).

Regarding claim 2: further comprising: receiving, via the application server, a request for user authentication; and upon receiving the request for user authentication, requesting the first dataset from a data storage (Riffert, ¶25).

Regarding claim 3: further comprising: receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset; based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level; generating, via the application server, a second security device based on one or both of the first friction level or the second friction level; and causing to output, via a GUI, the second security device (Riffert, ¶26-27).
Regarding claim 4: wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is higher than the first friction level; and generating, via the application server, the second security device such that the second security device has a higher security level than the first security device (Riffert, ¶49).

Regarding claim 5: wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is lower than the first friction level; and generating, via the application server, the second security device such that the second security device has a lower security level than the first security device (Riffert, ¶20).

Regarding claim 6: further comprising: receiving, via the application server, one or both of a second user input associated with the second security device or a third dataset; based on the second user input or the third dataset, determining, via the trained machine learning model, a third friction level; generating, via the application server, a third security device based on at least one of the first friction level, the second friction level, or the third friction level; and causing to output, via a GUI, the third security device (Riffert, ¶18, ¶27).

Regarding claim 7: further comprising: receiving, via the application server, a third user input associated with the third security device; and based on the third user input, initiating at least one protective measure via an analysis system (Riffert, ¶17, ¶28).

Regarding claim 8: wherein the security device includes at least one of a Completely Automated Public Turing test to tell Computers and Humans Apart ("CAPTCHA"), a toggle, a button, or a code verification element (Riffert, ¶35).
Regarding claim 9: wherein the dataset includes at least one of at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, or media content HyperText Markup Language ("HTML") manipulation (Riffert, ¶29, ¶22).

Regarding claim 10: wherein the trained machine learning model has been trained to learn associations between training data to identify an output, the training data including a plurality of: at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, media content HTML manipulation, or responses to security devices (Riffert, ¶20, ¶49).

Regarding claim 11: at least one memory storing instructions (Riffert, ¶60); and at least one processor operatively connected to the memory (Riffert, ¶58), and configured to execute the instructions to perform operations for dynamically generating a friction-based security device, the operations including: receiving, via an application server, a first dataset (Riffert, ¶65, ¶32), wherein Riffert discloses intelligently analyzing user behavior and identifying users that may intend to commit fraudulent behavior; trained machine learning models can look for different fraud attributes within previous data associated with the user to generate scores. The scores are not necessarily a single user score, but a collection of multiple scores corresponding to how probable it is that a user will commit multiple different types of inappropriate behavior.
Based on the different types of behavior that are identified as being associated with a user, the friction service may turn friction points on or off. Determining, via a trained machine learning model, a first friction level, wherein the trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract), wherein friction points to turn on/off may be determined dynamically based on a particular user and their scores; in this case, friction point(s) may be activated depending on which negative behavior attribute is identified by the machine learning algorithms (see also ¶34). Generating, via the application server, a first security device based on the first friction level (Riffert, ¶32, Fig. 2A); and causing to output, via a graphical user interface ("GUI"), the first security device (Riffert, ¶54, ¶35-36).

Regarding claim 12: the operations further comprising: receiving, via the application server, a request for user authentication; and upon receiving the request for user authentication, requesting the first dataset from a data storage (Riffert, ¶25).

Regarding claim 13: the operations further comprising: receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset; based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level; generating, via the application server, a second security device based on one or both of the first friction level or the second friction level; and causing to output, via a GUI, the second security device (Riffert, ¶26-27).
Regarding claim 14: wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is higher than the first friction level; and generating, via the application server, the second security device such that the second security device has a higher security level than the first security device (Riffert, ¶49).

Regarding claim 15: wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is lower than the first friction level; and generating, via the application server, the second security device such that the second security device has a lower security level than the first security device (Riffert, ¶20).

Regarding claim 16: the operations further comprising: receiving, via the application server, one or both of a second user input associated with the second security device or a third dataset; based on the second user input or the third dataset, determining, via the trained machine learning model, a third friction level; generating, via the application server, a third security device based on at least one of the first friction level, the second friction level, or the third friction level; and causing to output, via a GUI, the third security device (Riffert, ¶18, ¶27).

Regarding claim 17: the operations further comprising: receiving, via the application server, a third user input associated with the third security device; and based on the third user input, initiating at least one protective measure via an analysis system (Riffert, ¶17, ¶28).

Regarding claim 18: wherein the security device includes at least one of a Completely Automated Public Turing test to tell Computers and Humans Apart ("CAPTCHA"), a toggle, a button, or a code verification element (Riffert, ¶35).
Regarding claim 19: wherein the dataset includes at least one of at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, or media content HTML manipulation (Riffert, ¶22, ¶29).

Regarding claim 20: receiving, via an application server, a request for user authentication; upon receiving the request for user authentication, requesting a first dataset from a data storage; determining, via a trained machine learning model, a first friction level based on the first dataset (Riffert, ¶65, ¶32), wherein Riffert discloses intelligently analyzing user behavior and identifying users that may intend to commit fraudulent behavior; trained machine learning models can look for different fraud attributes within previous data associated with the user to generate scores. The scores are not necessarily a single user score, but a collection of multiple scores corresponding to how probable it is that a user will commit multiple different types of inappropriate behavior; based on the different types of behavior that are identified as being associated with a user, the friction service may turn friction points on or off. The trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract), wherein friction points to turn on/off may be determined dynamically based on a particular user and their scores; in this case, friction point(s) may be activated depending on which negative behavior attribute is identified by the machine learning algorithms (see also ¶34). The trained machine learning model having been trained to learn associations between training data to identify an output, the training data including a plurality of: (Riffert, ¶27-28, abstract) wherein friction points to turn on/off may be determined dynamically based on a particular user and their scores.
In this case, friction point(s) may be activated depending on which negative behavior attribute is identified by the machine learning algorithms (see also ¶34): at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, media content HTML manipulation, or responses to security devices (Riffert, ¶24-25), wherein Riffert discloses capturing all forms of user activity on the site including, but not limited to, which device is being used (e.g., device identifier, etc.), an IP address of the device, and standard cookie files from the device. Generating, via the application server, a first security device based on the first friction level; causing to output, via a GUI, the first security device (Riffert, ¶64, ¶34-35). Receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset (Riffert, ¶20), wherein new data may be received from the host platform that hosts the online resource where the user is interacting. Based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level (Riffert, ¶29, ¶34). Generating, via the application server, a second security device based on one or both of the first friction level or the second friction level (Riffert, ¶32, Fig. 2A); and causing to output, via a GUI, the second security device (Riffert, ¶54, ¶35-36).

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Monjur Rahim, whose telephone number is (571) 270-3890. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at 571-272-4219.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or CANADA) or 571-272-1000.

/Monjur Rahim/
Patent Examiner
United States Patent and Trademark Office
Art Unit: 2436; Phone: 571.270.3890
E-mail: monjur.rahim@uspto.gov
Fax: 571.270.4890

Prosecution Timeline

Oct 03, 2024: Application Filed
Dec 24, 2025: Non-Final Rejection (§102)
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603913
SELECTING A TRANSMISSION PATH FOR COMMUNICATING DATA BASED ON A CLASSIFICATION FOR THE DATA
Granted Apr 14, 2026 · 2y 5m to grant

Patent 12596807
UNIFIED EXTENSIBLE FIRMWARE INTERFACE (UEFI)-LEVEL PROCESSING OF OUT-OF-BAND COMMANDS IN HETEROGENEOUS COMPUTING PLATFORMS
Granted Apr 07, 2026 · 2y 5m to grant

Patent 12598458
METHODS AND DEVICES FOR SECURE COMMUNICATION WITH AND OPERATION OF AN IMPLANT
Granted Apr 07, 2026 · 2y 5m to grant

Patent 12580742
SECURE MEMORY SYSTEM PROGRAMMING FOR HOST DEVICE VERIFICATION
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12574214
DISTRIBUTION AND USE OF ENCRYPTION KEYS TO DIRECT COMMUNICATIONS
Granted Mar 10, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+16.1%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 879 resolved cases by this examiner. Grant probability derived from career allow rate.
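The note above says the grant probability comes straight from the career allow rate, and the with-interview figure appears to be that base plus the +16.1% interview lift, capped just below 100% (84% + 16.1% would exceed it). A sketch under that capping assumption; the function name and the 0.99 cap are my guesses, not stated by the tool:

```python
def projected_grant_probability(base_rate: float,
                                interview_lift: float = 0.0,
                                cap: float = 0.99) -> float:
    """Base allow rate plus interview lift, capped (cap value is an assumption)."""
    return min(base_rate + interview_lift, cap)

base = 742 / 879                                    # career allow rate, ~84%
with_interview = projected_grant_probability(base, interview_lift=0.161)
print(f"{base:.0%} base -> {with_interview:.0%} with interview")
# prints: 84% base -> 99% with interview
```

This reproduces the 84% and 99% figures shown in the projections panel, but the real model may apply a different ceiling or a non-additive adjustment.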
