Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to the original application filed on 10 March 2024.
2. Claims 1-20 are currently pending; claims 1, 11, and 20 are independent.
Information Disclosure Statement
3. No Information Disclosure Statement (IDS) has been filed.
Priority
4. The claimed priority date has been noted.
Drawings
5. The drawings filed on 10 March 2024 are accepted by the examiner.
Claim Rejections - 35 USC § 102
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Riffert et al. (US Patent Application Publication No. 2021/0142335), hereinafter Riffert.
Regarding claim 1, Riffert discloses:
receiving, via an application server, a first dataset (Riffert, ¶65, ¶32, disclosing that the friction service may “intelligently analyze user behavior and identify users that may intend to commit fraudulent behavior. Trained machine learning models can look for different fraud attributes within previous data associated with the user to generate scores. The scores are not necessarily a single user score, but a collection of multiple scores corresponding to how probable it is that a user will commit multiple different types of inappropriate behavior. Based on the different types of behavior that are identified as being associated with a user, the friction service . . .”);
determining, via a trained machine learning model, a first friction level, wherein the trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract, disclosing that friction points to turn on/off may be determined dynamically based on a particular user and their scores, and that friction point(s) may be activated depending on which negative behavior attribute is identified by the machine learning algorithms; see also ¶34);
generating, via the application server, a first security device based on the first friction level (Riffert, ¶32, Fig. 2A);
and causing to output, via a graphical user interface (“GUI”), the first security device (Riffert, ¶54, ¶35-36).
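For illustration only (the following is not part of Riffert's disclosure or of the claim language), the flow recited in claim 1 can be pictured as a minimal Python sketch. Every identifier below (FrictionModel, generate_security_device, render_to_gui) is hypothetical, and the averaging placeholder merely stands in for a trained model:

```python
# Illustrative sketch only; every name is hypothetical, and the
# averaging placeholder stands in for the trained machine learning
# model recited in claim 1 (cf. Riffert at ¶19, ¶32).
from dataclasses import dataclass


@dataclass
class SecurityDevice:
    kind: str            # e.g., "captcha" or "code_verification"
    security_level: int  # higher means more friction for the user


class FrictionModel:
    """Stand-in for a model trained to predict a friction level
    from behavioral signals in a dataset."""

    def predict_friction(self, dataset: dict) -> float:
        # A trained model would score fraud attributes here; this
        # placeholder just averages whatever numeric signals arrive.
        signals = [v for v in dataset.values() if isinstance(v, (int, float))]
        return sum(signals) / len(signals) if signals else 0.0


def generate_security_device(friction: float) -> SecurityDevice:
    # Higher predicted friction yields a higher-security challenge.
    if friction > 0.7:
        return SecurityDevice("code_verification", security_level=3)
    if friction > 0.3:
        return SecurityDevice("captcha", security_level=2)
    return SecurityDevice("button", security_level=1)


def render_to_gui(device: SecurityDevice) -> None:
    # Placeholder for "causing to output, via a GUI".
    print(f"GUI renders {device.kind} (level {device.security_level})")


first_dataset = {"time_on_page": 0.9, "screenshare_activity": 0.6}
friction = FrictionModel().predict_friction(first_dataset)
render_to_gui(generate_security_device(friction))
```

The threshold values are arbitrary; the point is only that the predicted friction level selects which security device is generated and surfaced.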
Regarding claim 2, Riffert further discloses:
further comprising: receiving, via the application server, a request for user authentication; and upon receiving the request for user authentication, requesting the first dataset from a data storage (Riffert, ¶25).
Regarding claim 3, Riffert further discloses:
further comprising: receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset; based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level; generating, via the application server, a second security device based on one or both of the first friction level or the second friction level; and causing to output, via a GUI, the second security device (Riffert, ¶26-27).
Regarding claim 4, Riffert further discloses:
wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is higher than the first friction level; and generating, via the application server, the second security device such that the second security device has a higher security level than the first security device (Riffert, ¶49).
Regarding claim 5, Riffert further discloses:
wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is lower than the first friction level; and generating, via the application server, the second security device such that the second security device has a lower security level than the first security device (Riffert, ¶20).
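As a hypothetical illustration of the claim 4 and claim 5 logic (not drawn from Riffert), the second security device's level can simply track the direction of the change in friction:

```python
# Hypothetical sketch of the claim 4/claim 5 logic: the second security
# device's level moves in the same direction as the change in friction.
def second_device_level(first_friction: float, second_friction: float,
                        first_level: int) -> int:
    if second_friction > first_friction:
        return first_level + 1          # claim 4: escalate security
    if second_friction < first_friction:
        return max(1, first_level - 1)  # claim 5: relax security
    return first_level                  # unchanged friction, same level


assert second_device_level(0.4, 0.8, first_level=2) == 3  # escalation
assert second_device_level(0.4, 0.1, first_level=2) == 1  # de-escalation
```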
Regarding claim 6, Riffert further discloses:
further comprising: receiving, via the application server, one or both of a second user input associated with the second security device or a third dataset; based on the second user input or the third dataset, determining, via the trained machine learning model, a third friction level; generating, via the application server, a third security device based on at least one of the first friction level, the second friction level, or the third friction level; and causing to output, via a GUI, the third security device (Riffert, ¶18, ¶27).
Regarding claim 7, Riffert further discloses:
further comprising: receiving, via the application server, a third user input associated with the third security device; and based on the third user input, initiating at least one protective measure via an analysis system (Riffert, ¶17, ¶28).
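A minimal hypothetical sketch of the claim 7 step, in which a user input associated with the third security device triggers a protective measure via an analysis system (all names, and the lockout measure itself, are illustrative):

```python
# Hypothetical sketch of the claim 7 step: a user input associated with
# the third security device triggers a protective measure.
class AnalysisSystem:
    def initiate_protective_measure(self, measure: str) -> None:
        print(f"protective measure initiated: {measure}")


def handle_third_input(user_input: dict, analysis: AnalysisSystem) -> None:
    # e.g., a failed challenge response initiates an account lockout.
    if not user_input.get("challenge_passed", False):
        analysis.initiate_protective_measure("lock_account")


handle_third_input({"challenge_passed": False}, AnalysisSystem())
```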
Regarding claim 8, Riffert further discloses:
wherein the security device includes at least one of a Completely Automated Public Turing test to tell Computers and Humans Apart (“CAPTCHA”), a toggle, a button, or a code verification element (Riffert, ¶35).
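For orientation, the device types enumerated in claim 8 can be ranked by the friction each imposes on the user; the ordering below is an assumption made for illustration and is not taken from Riffert or the claims:

```python
# Hypothetical ranking of the claim 8 device types by the friction each
# imposes; the ordering is an assumption made for illustration only.
SECURITY_DEVICES = [
    "toggle",             # level 1: lowest friction
    "button",             # level 2
    "captcha",            # level 3
    "code_verification",  # level 4: highest friction
]


def device_for_level(level: int) -> str:
    # Clamp to the available range and pick the matching device kind.
    level = min(max(level, 1), len(SECURITY_DEVICES))
    return SECURITY_DEVICES[level - 1]


print(device_for_level(3))  # -> "captcha"
```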
Regarding claim 9, Riffert further discloses:
wherein the dataset includes at least one of at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, or media content HyperText Markup Language (“HTML”) manipulation (Riffert, ¶29, ¶22).
Regarding claim 10, Riffert further discloses:
wherein the trained machine learning model has been trained to learn associations between training data to identify an output, the training data including a plurality of: at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, media content HTML manipulation, or responses to security devices (Riffert, ¶20, ¶49).
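As a hypothetical illustration of the claim 10 training signals, a fixed-width feature vector might be assembled as follows (the field names are illustrative):

```python
# Hypothetical feature extraction over the training signals listed in
# claim 10; the field names are illustrative, not taken from Riffert.
FEATURES = [
    "user_input_count", "digital_extraction", "screenshare_activity",
    "time_on_page", "time_to_respond", "html_manipulation",
]


def to_feature_vector(sample: dict) -> list:
    # Missing signals default to 0.0 so every vector has a fixed width;
    # a model would learn associations between such vectors and observed
    # outcomes (e.g., confirmed fraud) to predict a friction level.
    return [float(sample.get(name, 0.0)) for name in FEATURES]


print(to_feature_vector({"time_on_page": 12.5, "screenshare_activity": 1}))
```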
Regarding claim 11, Riffert discloses:
at least one memory storing instructions (Riffert, ¶60);
and at least one processor operatively connected to the memory (Riffert, ¶58) and configured to execute the instructions to perform operations for dynamically generating a friction-based security device, the operations including: receiving, via an application server, a first dataset (Riffert, ¶65, ¶32; see the discussion of claim 1 above);
determining, via a trained machine learning model, a first friction level, wherein the trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract; see the discussion of claim 1 above; see also ¶34);
generating, via the application server, a first security device based on the first friction level (Riffert, ¶32, Fig. 2A);
and causing to output, via a graphical user interface (“GUI”), the first security device (Riffert, ¶54, ¶35-36).
Regarding claim 12, Riffert further discloses:
the operations further comprising: receiving, via the application server, a request for user authentication; and upon receiving the request for user authentication, requesting the first dataset from a data storage (Riffert, ¶25).
Regarding claim 13, Riffert further discloses:
the operations further comprising: receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset; based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level; generating, via the application server, a second security device based on one or both of the first friction level or the second friction level; and causing to output, via a GUI, the second security device (Riffert, ¶26-27).
Regarding claim 14, Riffert further discloses:
wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is higher than the first friction level; and generating, via the application server, the second security device such that the second security device has a higher security level than the first security device (Riffert, ¶49).
Regarding claim 15, Riffert further discloses:
wherein generating the second security device based on one or both of the first friction level or the second friction level further comprises: determining, via the application server, the second friction level is lower than the first friction level; and generating, via the application server, the second security device such that the second security device has a lower security level than the first security device (Riffert, ¶20).
Regarding claim 16, Riffert further discloses:
the operations further comprising: receiving, via the application server, one or both of a second user input associated with the second security device or a third dataset; based on the second user input or the third dataset, determining, via the trained machine learning model, a third friction level; generating, via the application server, a third security device based on at least one of the first friction level, the second friction level, or the third friction level; and causing to output, via a GUI, the third security device (Riffert, ¶18, ¶27).
Regarding claim 17, Riffert further discloses:
the operations further comprising: receiving, via the application server, a third user input associated with the third security device; and based on the third user input, initiating at least one protective measure via an analysis system (Riffert, ¶17, ¶28).
Regarding claim 18, Riffert further discloses:
wherein the security device includes at least one of a Completely Automated Public Turing test to tell Computers and Humans Apart (“CAPTCHA”), a toggle, a button, or a code verification element (Riffert, ¶35).
Regarding claim 19, Riffert further discloses:
wherein the dataset includes at least one of at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, or media content HTML manipulation (Riffert, ¶22, ¶29).
Regarding claim 20, Riffert discloses:
receiving, via an application server, a request for user authentication; upon receiving the request for user authentication, requesting a first dataset from a data storage;
determining, via a trained machine learning model, a first friction level based on the first dataset (Riffert, ¶65, ¶32; see the discussion of claim 1 above);
wherein the trained machine learning model has been trained to predict a friction level based on at least one dataset (Riffert, ¶19, abstract; see the discussion of claim 1 above; see also ¶34);
the trained machine learning model having been trained to learn associations between training data to identify an output (Riffert, ¶27-28, abstract, quoting the same passage discussed for claim 1; see also ¶34), the training data including a plurality of:
at least one user input, user input data, an indication of digital extraction, screenshare activity, time on page, time to respond to security device, response to a security device, media content HTML manipulation, or responses to security devices (Riffert, ¶24-25, disclosing that the system may capture all forms of user activity on the site including, but not limited to, which device is being used (e.g., device identifier, etc.), an IP address of the device, and standard cookie files from the device);
generating, via the application server, a first security device based on the first friction level; causing to output, via a GUI, the first security device (Riffert, ¶64, ¶34-35).
receiving, via the application server, one or both of a first user input associated with the first security device or a second dataset (Riffert, ¶20, disclosing that new data may be received from the host platform that hosts the online resource where the user is interacting);
based on the first user input or the second dataset, determining, via the trained machine learning model, a second friction level (Riffert, ¶29, ¶34).
generating, via the application server, a second security device based on one or both of the first friction level or the second friction level (Riffert, ¶32, Fig.2A).
and causing to output, via a GUI, the second security device (Riffert, ¶54, ¶35-36).
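Tying the pieces together, a hypothetical end-to-end sketch of the claim 20 flow (authentication request, friction scoring, challenge, rescoring), under the same illustrative assumptions as the fragments above:

```python
# Hypothetical end-to-end sketch of the claim 20 flow (request, score,
# challenge, rescore); all logic is an illustrative stand-in.
def predict_friction(dataset: dict) -> float:
    vals = [v for v in dataset.values() if isinstance(v, (int, float))]
    return sum(vals) / len(vals) if vals else 0.0


def challenge_for(friction: float) -> str:
    return "code_verification" if friction > 0.5 else "captcha"


def authenticate(first_dataset: dict, second_dataset: dict) -> None:
    first = predict_friction(first_dataset)        # first friction level
    print("GUI shows:", challenge_for(first))      # first security device
    second = predict_friction(second_dataset)      # second friction level
    # The second device is based on one or both friction levels.
    print("GUI shows:", challenge_for(max(first, second)))


authenticate({"time_on_page": 0.2},
             {"time_on_page": 0.2, "screenshare_activity": 1.0})
```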
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Monjur Rahim, whose telephone number is (571) 270-3890.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay, can be reached on 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or CANADA) or 571-272-1000.
/Monjur Rahim/
Patent Examiner
United States Patent and Trademark Office
Art Unit: 2436; Phone: 571.270.3890
E-mail: monjur.rahim@uspto.gov
Fax: 571.270.4890