Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application, having Application No. 18/587,224, is presented for examination by the examiner. Claims 1, 8-9 and 15 are amended. Claims 1-20 have been examined.
Response to Arguments
Applicant’s arguments filed on 11/18/2025 have been fully considered but they are not persuasive.
Regarding applicant’s argument that Panasiuk does not describe the identity risk determination system as identifying anomalies, the examiner disagrees. Panasiuk teaches analyzing user activity, device behavior, and network signals to classify user interactions and determine a risk score associated with the interaction (Panasiuk: [0096]). Panasiuk explains that the system distinguishes fraudulent interactions and identifies anomaly signals such as “impossible travel,” which indicates that a user appears to be in two locations at once (Panasiuk: [0096]). Panasiuk further explains that the identity risk determination system facilitates real-time detection of unauthorized access or impersonation, thereby improving fraud detection for online systems (Panasiuk: [0097]). Additionally, Panasiuk teaches evaluating device identifiers and geolocation information to generate an identity risk score indicating suspicious authentications, such as logins from new devices or unfamiliar locations (Panasiuk: [0112]). Accordingly, Panasiuk teaches identifying anomalous activity associated with a user device, and applicant’s argument is therefore not persuasive.
Applicant further argues that Panasiuk fails to teach wherein the identifying the anomalous connection is further based on identifying that a second user device requested a connection to the network based on second user information corresponding to the same user profile. The examiner disagrees. Panasiuk teaches that the identity risk determination system evaluates user interactions using multiple behavioral and contextual signals associated with a user identity, including device identifiers, IP addresses, geographic locations, and network characteristics (Panasiuk: [0109]-[0112]). These signals are compared against previously observed activity associated with the same user identity in order to determine whether the current interaction deviates from expected behavior (Panasiuk: [0108], [0112]). Panasiuk further teaches detecting anomalous activity signals such as impossible travel, account cycling, and device cycling when evaluating user interactions (Panasiuk: [0139]). For example, Panasiuk explains that an “impossible travel” signal may indicate that a user appears to be accessing a system from two different locations at once, indicating suspicious activity associated with the same user identity (Panasiuk: [0096]). Detecting such signals necessarily involves identifying multiple access attempts associated with the same user identity or profile across different devices or access contexts. Thus, Panasiuk teaches identifying anomalous activity associated with a user account based on interactions originating from multiple devices or locations corresponding to the same user identity and applicant’s argument is therefore not persuasive.
Applicant further argues that Panasiuk does not disclose “update, by inputting one or more results of the one or more cyberthreat remediation actions into the anomaly detection model, the anomaly detection model.” The examiner disagrees. Panasiuk teaches that the identity risk determination system incorporates feedback derived from detected suspicious activity and user responses to improve future threat detection. Panasiuk explains that the system may request user input to verify whether a transaction or interaction was legitimate and that such user responses may become part of a training dataset used to improve threat recognition over time (Panasiuk: [0096], [0103]). Panasiuk further teaches that fraud feedback generated by system actions and user responses is incorporated into the system’s identity graph and used for training machine learning models within the model registry (Panasiuk: [0215], [0218]). The model registry receives training data based on fraud feedback and observed system activity, which is used to improve the system’s future risk assessments (Panasiuk: [0215]). These teachings correspond to updating the machine learning model using results and feedback associated with security events and remediation actions. Accordingly, Panasiuk teaches updating the system’s anomaly detection or risk assessment model using feedback derived from detected security events and user responses, and applicant’s argument is therefore not persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 9-13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Panasiuk (US 2024/0195828 A1) in view of Manthiramoorthy (US 2024/0179168 A1).
Regarding Claim 1
Panasiuk discloses:
A computing platform comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and memory storing computer-readable instructions that, when executed by the at least one processor, configure the computing platform to:
generate, based on registration information of users, a plurality of user profiles, wherein a given user profile comprises user information for a corresponding user (Panasiuk ¶107-110: The system builds interaction profiles based on user event data, including user registration details (e.g., email, IP address, device IDs) used to evaluate user trustworthiness and risks.);
identify, based on monitoring a network, one or more user devices requesting a connection to the network (Panasiuk ¶111-112: The system receives an API query during user authentication that includes user data such as email, IP address, and device ID. A high identity risk score can be reported when the user authenticates from a new device or an unknown location.);
train, based on the plurality of user profiles, an anomaly detection model, wherein training the anomaly detection model configures the anomaly detection model to identify anomalous connections and generate cyberthreat scores for connections based on input of user information (Panasiuk ¶95-97: The system uses a plurality of user profiles, including data on user activities and interactions, to train an anomaly detection model. This model is then configured to identify anomalous connections and generate cyberthreat scores based on user data, such as IP addresses, devices, and behaviors, to evaluate potential threats.);
identify, based on inputting user information associated with a user device, of the one or more user devices, into the anomaly detection model and based on a preliminary comparison of the user information to a user profile, of the plurality of user profiles and corresponding to a user of the user device (Panasiuk ¶96, 108-112: teaches inputting user information associated with a user device into a machine learned risk evaluation model and comparing that information to a corresponding user interaction profile. Panasiuk receives API requests including device identifiers, IP address, and user data and describes how it processes the inputs via a computational scoring engine using learned interaction profiles and identifies anomalies based on deviations from the user’s established profile.), an anomalous connection associated with the user device wherein the identifying the anomalous connection is further based on identifying that a second user device requested a connection to the network based on second user information corresponding to the same user profile (Panasiuk ¶96 and 139: teaches identifying a suspicious/anomalous interaction based on device and network behavior signals including “impossible travel,” which is when the same user is observed from different locations. Panasiuk also teaches device cycling and irregularities such as a known user authenticating from a previously unused device, thereby identifying an anomalous connection based on detecting that another device associated with the same user identity/profile is involved.);
generate, based on the user information and using the anomaly detection model, a cyberthreat score for the anomalous connection (Panasiuk ¶96 and 112-113: The system builds user interaction profiles using registration details and event data to assess risk and detect anomalies such as new devices or locations. It generates identity risk scores by evaluating factors like device behavior, network activity, and user input, while dynamically updating models for improved fraud detection.);
identify, by comparing the cyberthreat score to a threshold score, whether the cyberthreat score satisfies the threshold score, wherein the threshold score indicates the anomalous connection is a cyberthreat if the cyberthreat score satisfies the threshold score; initiate, based on identifying that the cyberthreat score satisfies the threshold score, one or more cyberthreat remediation actions for the anomalous connection (Panasiuk ¶118-121: teaches generating a risk score based on user interaction data and comparing it to a threshold value. If the score meets the threshold, it triggers an alert, classifying the interaction as a potential cyberthreat. The system processes user responses to refine the risk assessment and generates an API response with the updated risk information for further action.); and
update, by inputting one or more results of the one or more cyberthreat remediation actions into the anomaly detection model, the anomaly detection model (Panasiuk ¶136, 215-219: teaches updating a machine learning anomaly/risk model by inputting results of cyberthreat remediation actions into the model. Panasiuk further discloses that remediation actions such as alerts and security actions generate fraud feedback which is then entered into the identity graph, and that the ML model registry receives model training based on changes to the identity graph from fraud feedback to update the models used for future anomaly/risk determination.).
Panasiuk discloses identifying anomalous network activities based on comparisons between observed user/device behavior and stored profiles, generating risk determinations, and initiating remediation actions in response to detected anomalies. However, Panasiuk does not teach a remediation action that includes partitioning a user device to provide limited network access. Manthiramoorthy teaches initiating network partitioning actions in response to identified anomalous network access behavior. Manthiramoorthy discloses detecting anomalies by comparing fingerprint information associated with a client device requesting network access against stored fingerprint information for an authorized device and, upon determining an anomaly, executing an access policy that quarantines the client device to a quarantine VLAN or a less-privileged VLAN to restrict the device’s network access (¶40-41).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Manthiramoorthy’s network partitioning and limited access enforcement technique into Panasiuk’s anomaly-based identity risk determination and remediation framework in order to more effectively contain and mitigate detected anomalous connections by allowing restricted network access rather than complete denial, yielding predictable results consistent with known network access control and cybersecurity practices.
Regarding Claim 2
Panasiuk discloses:
The computing platform of claim 1, wherein generating the cyberthreat score comprises, with the anomaly detection model:
comparing the user information associated with the user device to the user profile corresponding to the user; identifying, based on the comparing, one or more shared characteristics between the user information and the user profile corresponding to the user; and generating, based on the identifying the one or more shared characteristics, a cyberthreat score representing a likelihood of the anomalous connection being initiated by a cyberthreat actor (Panasiuk ¶56, 64, 96, 136: The system compares user information (e.g., IP address, device information, and user activity) to a user profile, identifying shared characteristics that help classify whether an interaction is trusted or suspicious. Machine learning models apply these comparisons to generate a risk score (cyberthreat score) based on shared attributes, indicating the likelihood of cyberthreat activity. The system leverages fraud feedback and dynamically adjusts based on the risk signals, refining its models to better detect and address anomalous connections and potential cyberthreats.).
Regarding Claim 3
Panasiuk discloses:
The computing platform of claim 2, wherein the generating the cyberthreat score based on the identifying the one or more shared characteristics comprises: generating, based on the one or more shared characteristics, an initial cyberthreat score; applying, to the one or more shared characteristics, one or more weighting values; and updating, based on the one or more weighting values, the initial cyberthreat score (Panasiuk ¶66 and ¶112: The system calculates a risk score using user attributes like device, geolocation, and activity data, applying weightings to these factors to assess risk. It generates and updates the risk score based on these characteristics, dynamically modifying API responses on a per-client basis to reflect the assessed cyberthreat likelihood.).
Regarding Claim 4
Panasiuk discloses:
The computing platform of claim 1, wherein the instructions, when executed by the at least one processor, further configure the computing platform to: update, based on identifying whether the cyberthreat score satisfies the threshold score and based on user information associated with the anomalous connection, the user profile corresponding to the user (Panasiuk ¶96 and 112-113: The system uses device and geolocation information to calculate an identity risk score based on user behavior, such as device use, network activity, and authentication attempts. It dynamically updates the risk models and user profiles by incorporating new data, including suspicious activities and user interactions, to enhance fraud detection and improve the accuracy of future evaluations.).
Regarding Claim 5
Panasiuk discloses:
The computing platform of claim 1, wherein the one or more cyberthreat remediation actions comprise one or more of: causing a password reset, disrupting the anomalous connection, adding the user device to a watchlist of known cyberthreats, implementing additional authentication requirements for a user profile, of the plurality of user profiles, associated with the user device, or causing output of a cyberthreat review notification (Panasiuk ¶136 and ¶215: The system generates and sends alerts or security actions in response to detected anomalies, such as fraud or suspicious behavior. These actions include notifying users or clients about cyberthreats and updating risk models and profiles to improve future fraud detection and prevent further threats, including outputting a cyberthreat review notification.).
Regarding Claim 6
Panasiuk discloses:
The computing platform of claim 1, wherein the one or more cyberthreat remediation actions comprise: incrementing a cyberthreat counter associated with the user device; identifying, based on incrementing the cyberthreat counter and by comparing the cyberthreat counter to a threshold counter, whether the cyberthreat counter meets or exceeds the threshold counter; and outputting, based on identifying that the cyberthreat counter meets or exceeds the threshold counter, an indication that the user device is associated with a cyberthreat actor (Panasiuk ¶112-113: The system evaluates user interactions and updates risk models based on user profile data, behavior, device information, and network activity. It uses this data to generate dynamic identity risk scores, which identify suspicious activity (like new devices or login locations) and determine the likelihood of a cyberthreat. The system dynamically adjusts scores and outputs relevant risk signals, assisting in fraud detection and threat mitigation.).
Regarding Claim 9
Claim 9 is directed to a method corresponding to the computer-implemented system in claim 1. Claim 9 is similar in scope to claim 1 and is therefore rejected under similar rationale.
Regarding Claim 10
Claim 10 is directed to a method corresponding to the computer-implemented system in claim 2. Claim 10 is similar in scope to claim 2 and is therefore rejected under similar rationale.
Regarding Claim 11
Claim 11 is directed to a method corresponding to the computer-implemented system in claim 3. Claim 11 is similar in scope to claim 3 and is therefore rejected under similar rationale.
Regarding Claim 12
Claim 12 is directed to a method corresponding to the computer-implemented system in claim 5. Claim 12 is similar in scope to claim 5 and is therefore rejected under similar rationale.
Regarding Claim 13
Claim 13 is directed to a method corresponding to the computer-implemented system in claim 6. Claim 13 is similar in scope to claim 6 and is therefore rejected under similar rationale.
Regarding Claim 15
Claim 15 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 1. Claim 15 is similar in scope to claim 1 and is therefore rejected under similar rationale.
Regarding Claim 16
Claim 16 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 2. Claim 16 is similar in scope to claim 2 and is therefore rejected under similar rationale.
Regarding Claim 17
Claim 17 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 3. Claim 17 is similar in scope to claim 3 and is therefore rejected under similar rationale.
Regarding Claim 18
Claim 18 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 5. Claim 18 is similar in scope to claim 5 and is therefore rejected under similar rationale.
Regarding Claim 19
Claim 19 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 6. Claim 19 is similar in scope to claim 6 and is therefore rejected under similar rationale.
Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Panasiuk (US 2024/0195828 A1), in view of Manthiramoorthy (US 2024/0179168 A1) as applied to claim 1 above, and in further view of Islam (US 2025/0112950 A1).
Regarding Claim 7
Panasiuk and Manthiramoorthy teach building and updating user interaction profiles from registration details, device and network behavior, and event data to detect anomalies, generate and compare cyberthreat risk scores to thresholds, trigger remediation actions for potential threats, and use feedback from these actions to refine and update the anomaly detection model for improved threat detection. However, they do not disclose the following limitation: “wherein the instructions, when executed by the at least one processor, further configure the computing platform to: compare the cyberthreat score to a second threshold score, wherein the second threshold score exceeds the threshold score; identify, based on the comparing, whether the cyberthreat score meets or exceeds the second threshold score; and increase, based on identifying that the cyberthreat score meets or exceeds the second threshold score, a frequency of authentication requests based on a user profile, of the plurality of user profiles, associated with the user device.”
However, in an analogous art, Islam discloses a risk action system/method that includes:
The computing platform of claim 1, wherein the instructions, when executed by the at least one processor, further configure the computing platform to: compare the cyberthreat score to a second threshold score, wherein the second threshold score exceeds the threshold score (Islam ¶91-92 and 99-101: discloses computing a risk score for an authentication or in-session request and comparing it to a threshold to determine authentication actions. The second risk score for in-session requests can be argued to represent a comparison against a higher (second) threshold, indicating more stringent security criteria.); identify, based on the comparing, whether the cyberthreat score meets or exceeds the second threshold score (Islam ¶101: The system identifies whether the second risk score meets/exceeds its applicable threshold before triggering additional MFA authentication.); and increase, based on identifying that the cyberthreat score meets or exceeds the second threshold score, a frequency of authentication requests based on a user profile, of the plurality of user profiles, associated with the user device (Islam ¶91-92 and 99-101: teaches triggering additional MFA challenges during a session when a second risk score for in-session activity meets a threshold that may differ from the initial threshold, thereby increasing the frequency of authentication requests. The risk evaluation is based on a stored pattern of multiple historical attributes tied to the user account and device, which serves as the user profile associated with the user device.).
Given the teaching of Islam, a person having ordinary skill in the art before the effective filing date would have recognized the desirability of modifying the teachings of Panasiuk and Manthiramoorthy by increasing authentication frequency when a higher security threshold is met based on a user profile. Islam teaches that the system determines a second risk score for in-session requests from attributes tied to the user account and device, compares it to a threshold that may differ from the initial threshold, and triggers MFA when the score meets or exceeds that threshold, thereby increasing authentication requests based on the user profile (Islam ¶91-92 and 99-101).
Regarding Claim 14
Claim 14 is directed to a method corresponding to the computer-implemented system in claim 7. Claim 14 is similar in scope to claim 7 and is therefore rejected under similar rationale.
Regarding Claim 20
Claim 20 is directed to a computer-readable media storing instructions corresponding to the computer-implemented system in claim 7. Claim 20 is similar in scope to claim 7 and is therefore rejected under similar rationale.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Panasiuk (US 2024/0195828 A1), in view of Manthiramoorthy (US 2024/0179168 A1) as applied to claim 1 above, and in further view of Deutschmann (US 2020/0336308 A1).
Regarding Claim 8
Panasiuk discloses:
The computing platform of claim 1, wherein the instructions, when executed by the at least one processor, further configure the computing platform to initiate the one or more cyberthreat remediation actions by: initiating, for the user device associated with the anomalous connection, the one or more cyberthreat remediation actions (Panasiuk ¶136, 215, 217: describes a security feedback loop that generates remediation actions, such as alerts and security actions, in response to detected anomalies and fraud. It uses fraud feedback and real-time risk factors to update and refine the system’s models, leading to updated remediation responses. These actions, which can include system alerts or custom security measures, help address identified threats and anomalous user behavior.);
Panasiuk and Manthiramoorthy teach building and updating user interaction profiles from registration details, device and network behavior, and event data to detect anomalies, generate and compare cyberthreat risk scores to thresholds, trigger remediation actions for potential threats, and use feedback from these actions to refine and update the anomaly detection model for improved threat detection. However, they do not disclose the following limitation: “maintaining, uninterrupted, a connection for a second user device, associated with a verified connection and with a user profile corresponding to the anomalous connection.”
However, in an analogous art, Deutschmann discloses an anomalous action system/method that includes:
and maintaining, uninterrupted, a connection for a second user device, associated with a verified connection and with a user profile corresponding to the anomalous connection (Deutschmann ¶49-50 and 61-66: describes maintaining an authenticated secure data network connection when a behavioral match is within a preset threshold, even if other authentication factors fail. Behavioral patterns are stored and compared against a user profile, enabling the verified connection to be tied to the same user profile as the anomalous connection. This allows a second user device associated with the verified connection to remain connected without interruption.).
Given the teaching of Deutschmann, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Panasiuk and Manthiramoorthy by remediating an anomalous connection while maintaining a verified connection tied to the same user profile. Deutschmann teaches that a system maintains an authenticated session when behavioral data meets a threshold and stores behavioral patterns in a user profile, while terminating or restricting connections failing the threshold, thereby supporting uninterrupted access for a verified connection associated with the same profile (Deutschmann ¶49-50 and 61-66).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD A ABDULLAH whose telephone number is (571) 272-1531. The examiner can normally be reached on Monday - Friday, 8:30am - 5:00pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached on (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAAD AHMAD ABDULLAH/Examiner, Art Unit 2431
/SHIN-HON (ERIC) CHEN/Primary Examiner, Art Unit 2431