DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This initial Office action is responsive to the communication dated 08/02/2024.
Claims 1-20 are pending and presented for examination.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
This application, filed on August 02, 2024, claims priority to parent application No. 17/589,469, filed on January 31, 2022.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to
www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 9-11, 17, 18, and 20 of U.S. Patent No. 12,095,792. Although the claims at issue are not identical, they are not patentably distinct from each other. Please refer to the comparison table below.
Instant Application 18/792,870: MULTI-VARIATE ANOMALOUS ACCESS DETECTION
US Pat. No. 12,095,792 (App. No. 17/589,469): MULTI-VARIATE ANOMALOUS ACCESS DETECTION
Instant claim 1:
A computer system comprising: one or more hardware processors; and one or more hardware storage devices that store instructions that are executable by the one or more hardware processors to cause the computer system to: access a request for a resource; identify a plurality of features associated with the request; identify a plurality of scopes of the resource; form a set of combinations by associating each one of the features with each one of the scopes such that said each feature is included in multiple different combinations, with each one of those different combinations having a different scope; form combination groupings by grouping the different combinations in the set using each of the features as a basis; for each one of the combination groupings, dedicate a corresponding machine learning model to said each one combination group such that multiple machine learning models are dedicated, wherein each respective multiple machine learning model is tasked with performing anomaly detection for its respective combination grouping; cause the multiple machine learning models to perform anomaly detection on their respective combination groupings; and output one or more results from the multiple machine learning models.
Patent claim 1:
A computing system that detects information relevant to a potential access anomaly of an access request, said computing system comprising: one or more hardware processors; and one or more computer-readable media having thereon computer-executable instructions that are executable by the one or more hardware processors to cause the computing system to: detect receipt of an access request; identify a plurality of features of an access pattern of the access request; identify a plurality of scopes of a resource requested to be accessed by the access request; form a grid of combinations by combining each feature from the plurality of features with each scope from the plurality of scopes such that said each feature is included in multiple different combinations, with each of those combinations having a different scope; form combination groupings by grouping the combinations in the grid using each feature in the plurality of features as a basis for forming the combination groupings; for each combination group in the combination groupings, cause a corresponding machine learning model to be dedicated to said each combination group such that multiple machine learning models are dedicated to the grid of combinations and such that each feature in the plurality of features is associated with a corresponding dedicated machine learning model, wherein the multiple machine learning models are tasked with performing anomaly detection for their respective combination grouping; cause the multiple machine learning models to perform anomaly detection on their respective combination groupings; and output a result of the anomaly detection.
Instant claim 2:
The computer system of claim 1, wherein the set of combinations forms a grid of combinations.
Patent claim 1 (excerpt):
… form a grid of combinations by combining each feature from the plurality of features with each scope from the plurality of scopes …
Instant claim 3:
The computer system of claim 1, wherein each respective multiple machine learning model is dedicated to its respective combination grouping.
Patent claim 1 (excerpt):
… a corresponding machine learning model to be dedicated to said each combination group …
Instant claim 4:
The computer system of claim 1, wherein at least one of the plurality of features is a source Internet Protocol address of the request.
Patent claim 7:
The computing system of claim 6, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 5:
The computer system of claim 1, wherein the plurality of features includes a geographical location of a source of the request.
Patent claim 7:
The computing system of claim 6, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 6:
The computer system of claim 1, wherein the plurality of features includes a username associated with the request.
Patent claim 7:
The computing system of claim 6, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 7:
The computer system of claim 1, wherein the plurality of features includes a requesting application associated with the request.
Patent claim 7:
The computing system of claim 6, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 8:
The computer system of claim 1, wherein the plurality of features includes a security credential associated with the request.
Patent claim 7:
The computing system of claim 6, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 9:
The computer system of claim 1, wherein the anomaly detection is performed using an unsupervised machine learning model.
Patent claim 9:
The computing system in accordance with claim 1, the anomaly detection performed using an unsupervised machine learning model.
Instant claim 10:
The computer system of claim 1, wherein the anomaly detection is performed using a semi-supervised machine learning model.
Patent claim 10:
The computing system in accordance with claim 1, the anomaly detection performed using a semi-supervised machine learning model.
Instant claim 11:
A method comprising: accessing a request for a resource; identifying a plurality of features associated with the request; identifying a plurality of scopes of the resource; forming a set of combinations by associating each one of the features with each one of the scopes such that said each feature is included in multiple different combinations, with each one of those different combinations having a different scope; forming combination groupings by grouping the different combinations in the set using each of the features as a basis; for each one of the combination groupings, dedicating a corresponding machine learning model to said each one combination group such that multiple machine learning models are dedicated, wherein each respective multiple machine learning model is tasked with performing anomaly detection for its respective combination grouping; causing the multiple machine learning models to perform anomaly detection on their respective combination groupings; and outputting one or more results from the multiple machine learning models.
Patent claim 11:
A computer-implemented method for detecting information relevant to a potential access anomaly of an access request, the method comprising: detecting receipt of an access request; identifying a plurality of features of an access pattern of the access request; identifying a plurality of scopes of a resource requested to be accessed by the access request; forming a grid of combinations by combining each feature from the plurality of features with each scope from the plurality of scopes such that said each feature is included in multiple different combinations, with each of those combinations having a different scope; forming combination groupings by grouping the combinations in the grid using each feature in the plurality of features as a basis for forming the combination groupings; for each combination group in the combination groupings, causing a corresponding machine learning model to be dedicated to said each combination group such that multiple machine learning models are dedicated to the grid of combinations and such that each feature in the plurality of features is associated with a corresponding dedicated machine learning model, wherein the multiple machine learning models are tasked with performing anomaly detection for their respective combination grouping; causing the multiple machine learning models to perform anomaly detection on their respective combination groupings; and outputting a result of the anomaly detection.
Instant claim 12:
The method of claim 11, wherein the set of combinations forms a grid of combinations.
Patent claim 11 (excerpt):
… forming a grid of combinations by combining each feature from the plurality of features with each scope from the plurality of scopes …
Instant claim 13:
The method of claim 11, wherein each respective multiple machine learning model is dedicated to its respective combination grouping.
Patent claim 11 (excerpt):
… causing a corresponding machine learning model to be dedicated to said each combination group …
Instant claim 14:
The method of claim 11, wherein at least one of the plurality of features is a source Internet Protocol address of the request.
Patent claim 17:
The method of claim 15, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 15:
The method of claim 11, wherein the plurality of features includes a geographical location of a source of the request.
Patent claim 17:
The method of claim 15, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 16:
The method of claim 11, wherein the plurality of features includes a username associated with the request.
Patent claim 17:
The method of claim 15, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 17:
The method of claim 11, wherein the plurality of features includes a requesting application associated with the request.
Patent claim 17:
The method of claim 15, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 18:
The method of claim 11, wherein the plurality of features includes a security credential associated with the request.
Patent claim 17:
The method of claim 15, wherein the plurality of features include a plurality of the following: a source Internet Protocol address of the access request; a geographical location of a source of the access request; a username associated with the access request; a requesting application associated with the access request; and a security credential associated with the access request.
Instant claim 19:
The method of claim 11, wherein the anomaly detection is performed using an unsupervised machine learning model.
Patent claim 18:
The method in accordance with claim 11, the anomaly detection performed using an unsupervised machine learning model.
Instant claim 20:
One or more hardware storage devices that store instructions that are executable by one or more hardware processors to cause the one or more hardware processors to: access a request for a resource; identify a plurality of features associated with the request; identify a plurality of scopes of the resource; form a set of combinations by associating each one of the features with each one of the scopes such that said each feature is included in multiple different combinations, with each one of those different combinations having a different scope; form combination groupings by grouping the different combinations in the set using each of the features as a basis; for each one of the combination groupings, dedicate a corresponding machine learning model to said each one combination group such that multiple machine learning models are dedicated, wherein each respective multiple machine learning model is tasked with performing anomaly detection for its respective combination grouping; cause the multiple machine learning models to perform anomaly detection on their respective combination groupings; and output one or more results from the multiple machine learning models.
Patent claim 20:
One or more computer-readable hardware storage media having thereon computer-executable instructions that are executable by one or more processors of a computing system to cause the computing system to: detect receipt of an access request; identify a plurality of features of an access pattern of the access request; identify a plurality of scopes of a resource requested to be accessed by the access request; form a grid of combinations by combining each feature from the plurality of features with each scope from the plurality of scopes such that said each feature is included in multiple different combinations, with each of those combinations having a different scope; form combination groupings by grouping the combinations in the grid using each feature in the plurality of features as a basis for forming the combination groupings; for each combination group in the combination groupings, cause a corresponding machine learning model to be dedicated to said each combination group such that multiple machine learning models are dedicated to the grid of combinations and such that each feature in the plurality of features is associated with a corresponding dedicated machine learning model, wherein the multiple machine learning models are tasked with performing anomaly detection for their respective combination grouping; cause the multiple machine learning models to perform anomaly detection on their respective combination groupings; and output a result of the anomaly detection.
Claims 1-20 – Objected
Claims 1-20 are objected to, but would be allowable if the nonstatutory double patenting rejection set forth above is overcome. The following is the examiner’s analysis of the pertinent prior-art references.
Johnson et al. (US PGPUB # US 2020/0280573) discloses, in FIG. 4, a block diagram of one embodiment of a computer-readable medium 400. This computer-readable medium may store instructions corresponding to the operations of FIG. 3 and/or any techniques described herein. Thus, in one embodiment, instructions corresponding to EDCS 160 may be stored on computer-readable medium 400. In operation 310, EDCS 160 receives a particular access indication of a particular access attempt to an electronic resource by a particular user, according to various embodiments. The access indication may be in the form of data sent to EDCS (e.g. data originating at data sources 202 and possibly processed via ingestion queue 204 and normalization module 206). The access attempt in operation 310 as well as the electronic resource can be a wide range of actions and resources. An electronic resource in this context may refer to a particular hardware and/or software component, and can include various physical devices having an electronic component. Thus, the electronic resource may include a particular server system and/or a service running on that system (e.g. a web server such as APACHE™, a remote login service such as secure shell (SSH)). The electronic resource can include an application running on top of another service, for example, a web application enabling payment of currency that is running on top of a web server. An electronic resource may also include a firewall, proxy server, network traffic balancer, workstation, network switch, etc. (Fig. 3(310), ¶35-¶36). The access attempt itself may also take many forms and will frequently correspond to the type of electronic resource for which access is being attempted. In the case of a web application, the access attempt may include an initial log-on, where a user provides an account identifier (e.g. user name) and one or more authorization credentials (e.g. a password, biometric, text message one-time code, etc.).
The access attempt may also include accessing particular resources within an application (e.g. phone app, web app, or other), such as attempting to transfer funds after an initial authentication, change a mailing address associated with an account, change a password, add a funding instrument (e.g. new debit card), etc. Thus, an access attempt may be an attempt to use particular functionality of various types of software. The access attempt may be associated with particular authentication credentials (e.g. a username or a phone number for a particular account or other identifier), and depending on the access attempt, may not have an associated particular identifier (e.g. an anonymous access). Access attempts can include accessing particular files, either locally on a system or remotely via a network (e.g. file server and/or cloud storage). Access attempts can include accessing database storage (e.g. executing a query against a relational database which may read and/or modify data in the database). Changing file permissions or other file attributes (e.g. changing a name) can also be classified as an access attempt. Copying or executing a file, locally or remotely, may also be an access attempt. There are many different types of actions that may be considered an access attempt. All such access attempts may be logged by one or more systems (e.g. by a local operating system, intrusion detection system, firewall, network storage device, etc.). Both successful and unsuccessful attempts may be logged, which can be in real-time and/or in batch records. All such data may be forwarded to EDCS 160 for analysis. (¶38-¶40). In operation 320, EDCS 160 accesses user behavior model 230 and system access model 240 responsive to a particular access indication, according to various embodiments (although in some embodiments EDCS 160 may access only one of these models). 
Accessing these models is a precursor to using them in following steps to make determinations regarding an access attempt, according to various embodiments. (Fig. 3(320), ¶41). System access model 240 is based on access records and system characteristic data for one or more particular electronic resources, according to various embodiments. Access records for a particular system may include any access attempts made relative to resources controlled by that system. File access, application access, network ports accessed (e.g. packets sent to a particular TCP/IP port, UDP port, or other port on a system) may all be logged. From such data, system profiles can be built. (¶44). System access records may thus relate to various actions occurring relative to particular devices—interactions with them via a network and/or remote applications, as well as locally occurring actions. These actions may in some cases also be associated with a particular user (e.g., a system might have a record that a particular userID logged into that system). But in other cases, system access records may not be tied to any specifically identifiable individual. As an example, a service account (not tied to a specific human individual) for a web application X accesses a database Table 3 on an hourly basis from one of six servers, and usually performs a query and update on 10 or fewer rows on that database table. However, regardless of whether it was a service account for web application X or not, a mass update on 1000 rows of Table 3 every minute would not fit the usual system level model and trigger alerting, but if the system level model is aware of who is accessing that table at a rapid volume, it could identify the user(s) that exhibited the abnormal behavior (e.g. if a particular account accessing the table can be tied to an identifiable individual).
Access records may thus include, in some embodiments, data for particular ones of a plurality of components of an electronic resource, information such as: one or more indications of when the particular component was previously accessed, one or more indications of particular user accounts used to access the particular component, or one or more indications of access locations associated with previous access attempts. System access model 240 may include expected system access profiles for ones of the plurality of components based on the access records, and identifying one or more access anomalies may be based on a comparison of the particular access indication to the expected system access profiles. System characteristic data may include a variety of information about the hardware and software that corresponds to an electronic device. This information can be helpful as it can be used to locate comparable systems within system access model 240. If a first system has identical or highly overlapping system characteristics with a second system, access data of one or more second systems may be more important (relevant) to determining if an anomaly has occurred relative to the first system. Consider a server cluster that includes 16 machines all with identical (or nearly identical) hardware and software that are all configured to process web page requests. These systems might be expected to have similar access profiles and thus various of the 16 systems may be highly useful to compare to one another to determine if an access anomaly has occurred. System characteristic hardware information can include number and type of storage devices, network interface devices, processors, cache and other rapid-access memory, display size, etc. Software information can include operating system type and version, network driver type and version (and other hardware drivers as well).
Application data may also be included, such as names and versions of different software packages installed. System characteristic data can also include network location information, for example, whether a system is located on premises or in a remote cloud, and physical location data, such as what city, region, country, etc. a system is located in. Like the user behavior model 230, system access model 240 can be built for one or many systems. System access model 240 can in some instances include multiple different models for multiple different systems and/or types of systems. One model type might apply to end-user desktop systems, for example, while another applies to server systems running web-facing application services. Any number of variations are possible in regard to the type and number of system models that may be used. (¶47-¶51).
Turning to operation 340 of FIG. 3, EDCS 160 identifies one or more access anomalies related to an electronic resource that has had an access attempt, according to some embodiments. These access anomalies may be identified based on results of processing an access indication through user behavior model 230 and/or system access model 240. (Fig. 3(340), ¶56). Identifying an access anomaly may include a determination that an access attempt is statistically unlikely (e.g. within a particular threshold). This determination threshold can be set to a pre-determined percentile such as 0.1% unlikelihood, or some other number, for example. Multiple different thresholds can be set for different types of access attempts, as well. For access to a highly controlled and important electronic resource (e.g. the master architectural designs for a company's next-generation microprocessor), an access attempt that has even a 10% or 20% unlikelihood might initiate a mitigation response. For heavily accessed resources (e.g., a networked file folder that is used by all employees in a company), the threshold might be set fairly low (e.g. 0.005% unlikelihood). Many different such thresholds are possible for different scenarios.
In operation 350, EDCS 160 uses a mitigation model to implement one or more mitigation actions responsive to the one or more identified access anomalies, according to various embodiments. Further, these mitigation actions may also be tracked to see what the results of the actions were, which can be used to refine a mitigation model. (Fig. 3, ¶57-¶58). Results of mitigation actions may also be tracked by mitigation deployment module 220 in various embodiments, and based on an evaluation of these results, a mitigation model can be updated with the goal of producing better mitigation outcomes in the future. Consider the above example of unusual file access on a corporate intranet by an attorney. In the event that file access was denied, this denial could be validated by the attorney's supervisor. For example, the supervisor could be sent an email saying “Attorney X was denied access to file Y at [date, time]. Please let us know if this mitigation action was believed to be proper.” The supervisor could then click an HTML form (or other input mechanism) allowing the supervisor the options (1) “Yes, Attorney X should not have attempted to access this file”, (2) “No, Attorney X should have been able to have access, and this access was denied improperly”, or (3) “Unknown—I am unsure whether Attorney X should have had access.” In the event that denial of file access was proper, the mitigation model can be updated to be more likely to deny file access in similar circumstances. Conversely, if the denial of access was incorrect, the mitigation model can be updated to make it less likely to deny similar access requests in the future. A result of “unknown” could result in no change, or further prompting (e.g. emailing the supervisor and Attorney X both at a later date to prompt the supervisor to investigate the issue further).
In another data access scenario in which a user process is reading data from and/or writing data to a database in unexpectedly large amounts, mitigation actions could include rate-limiting the user, disabling access entirely, and/or notifying IT personnel, security personnel, and/or management personnel. Likewise, after the mitigation action, stakeholders could be contacted to solicit feedback on whether the mitigation action was proper or not, resulting in updates to the mitigation model. In some circumstances, evaluating mitigation results and updating a mitigation model may be performed automatically without human intervention. For example, if a user device is consuming too much bandwidth on a network (e.g., the access attempt being the use of network resources), a mitigation action could include capping the user's bandwidth to a specific level (e.g. 512 Kbps), along with an alert to the user that they have been limited. If the user does not complain within a specific timeframe (e.g. through a link provided in an email alert to the user), and the overall congestion on the network is also measured to be lower after the mitigation action is taken, then EDCS 160 can determine automatically that the mitigation action was successful, and update the mitigation model accordingly so that similar situations are more likely to garner a similar mitigation response. Conversely, if the user had a legitimate need for the bandwidth usage (e.g., possibly later verified by a supervisor or another decision maker), and the limitation thus impacted the user's job negatively, then the mitigation model could be updated indicating that the mitigation action produced a negative result (e.g., making the model less likely to attempt a similar action in the future in related circumstances). This self-learning allows for improved mitigation responses as time goes on and more input on mitigation results is gathered.
In one embodiment, user behavior model 230 indicates that a particular user account associated with a particular user is expected to be used to access an electronic resource from a particular location during a particular time period, where system access model 240 indicates that the particular user account is expected to be used to access particular components of the electronic resource. System access model 240 may indicate the user account is expected to access the electronic resource based on some threshold of probability being met based on various factors—e.g., the user account has accessed that resource in the past, or similarly situated users have accessed the resource (e.g., co-workers sharing a same job title/responsibility with the user). In this scenario, identifying one or more access anomalies can include identifying a first anomaly when the particular user account is used to access the electronic resource at a different location than the particular location and identifying a second anomaly when the particular user account is used to access different components of the electronic resource than the particular components. In other words, if a user makes an access attempt for a resource at an unexpected location, this may cause an anomaly. Consider a user “B Smith” who routinely accesses an electronic resource (database “JKL”) from San Jose, Calif. and from Phoenix, Ariz., but has never attempted to access that database when located in Moscow, Russia, nor has any of his co-workers under the same supervisor, nor has his supervisor herself attempted to access that database when their IP address indicates they are located outside of the United States. A rule-based system may simply see that B Smith has access privileges for database JKL, but an intelligent anomaly detector system can calculate that this is a (highly) unusual access pattern based on its models, and take a mitigation action in response.
Likewise, an access anomaly may be detected when a different component of a resource is accessed other than what may be typically expected (e.g., expected within some threshold probability level). If user BSmith routinely accesses TABLE_01 from database JKL and performs about 15 read-only accesses a week, it may be unusual to see user BSmith access TABLE_99 (a different component) from database JKL and perform 120,000 write operations within a half hour span of time. Thus, such an action may be considered an anomaly, and cause a mitigation action to be taken. (¶60-¶63).
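The two anomaly types discussed above (unexpected location, unexpected component) can be sketched as simple probability-threshold checks. This is a hypothetical illustration only: the function name, profile structure, and probability values are assumptions, not from the application.

```python
# Illustrative sketch: flag an access as anomalous if its location or the
# resource component it targets falls below a historical probability threshold
# (compare the "0.005% unlikelihood" style threshold discussed above).
def detect_anomalies(access, profile, threshold=0.005):
    """access: dict with 'location' and 'component' of the current attempt.
    profile: historical probabilities of observed locations/components."""
    anomalies = []
    if profile["locations"].get(access["location"], 0.0) < threshold:
        anomalies.append("unexpected_location")   # first anomaly type
    if profile["components"].get(access["component"], 0.0) < threshold:
        anomalies.append("unexpected_component")  # second anomaly type
    return anomalies

# Hypothetical profile for user "B Smith" accessing database JKL.
profile = {
    "locations": {"San Jose": 0.6, "Phoenix": 0.4},
    "components": {"TABLE_01": 1.0},
}
print(detect_anomalies({"location": "Moscow", "component": "TABLE_99"}, profile))
# → ['unexpected_location', 'unexpected_component']
```

A rule-based privilege check would pass both accesses; the profile-based check flags the Moscow/TABLE_99 attempt even though the account technically holds the privilege.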
Koottayi et al. (US PGPUB. # US 2018/0288063) discloses, FIG. 1 further illustrates a mechanism for analyzing the information associated with a new access request against a model generated for the user or group of users to determine if the user's access request is anomalous. The incoming real-time data and the historical data values for parameters configured by the behavior analytics engine 320 and/or the integrating application (e.g., the application that the user is requesting access to) can, in certain embodiments, become data points for building the cluster. Once the cluster is established, the behavior analytics engine 320 may be configured to detect anomalous requests from a user based on the model. In some embodiments, the behavior analytics engine 320 is configured to select a user model for the user from a plurality of user models associated with the user. The selection may be based on information regarding the user, the access request received from the user, and/or the target system, application, or resource for the request, for instance, the IP address of the user, the user identifier, and/or the type of application or resource that the user is requesting to access. For example, if the user is requesting access to a financial application on the target system, the behavior analytics engine 320 may be configured to select a model from the plurality of models that analyzes a subset of parameters defined by the financial application. As discussed herein, a financial application might want to analyze access requests from a user based on parameters such as the user id, the time of access, the duration of access, and so on. These parameters are configured with the behavior analytics engine 320, and the behavior analytics engine 320 can build a model for the user over a period of time by analyzing these parameters. 
Consequently, when a user requests access to the financial application, the behavior analytics engine 320 can select the model for determining whether the request is anomalous. In some embodiments, the behavior analytics engine 320 may utilize machine learning and cluster analysis to detect anomalies. For example, once the behavior analytics engine 320 has selected a particular user model, the behavior analytics engine 320 may be configured to analyze the information associated with the access request against the user model and determine if the user's access request is anomalous. In various embodiments, the behavior analytics engine 320 may be configured to determine an anomalous request by determining deviation of the access request from the one or more data clusters generated by the user model. If the deviation is beyond a particular threshold value, the request is determined to be anomalous. The threshold value may be determined by a user (e.g., an administrator) of system 100, or automatically by the behavior analytics engine 320, for example using a standard deviation of 2D or 3D. The behavior analytics engine 320 may be configured to detect anomalies with greater accuracy compared to traditional batch job based detection. The behavior analytics engine 320 may use machine learning to detect anomalies. Advantageously, the model is generated based on historical data and updated using real time data and can constantly learn from this information and determine whether an anomaly should be triggered intelligently. Instead of updating the model every week or every other week, the model can be updated in real time such that an anomalous access can be detected based on all the historical data (including the most recent access by the user). This enables an access request that has been identified as an anomaly to be taken into account promptly for the next request. (¶116-¶117).
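The cluster-deviation check Koottayi describes can be sketched as follows. This is an illustrative stand-in, not Koottayi's implementation: the function name, feature choices, and cluster values are assumptions, and the threshold multiplier `k` plays the role of the "standard deviation of 2D or 3D" criterion.

```python
# Rough sketch of cluster-deviation anomaly detection: a request is anomalous
# if its distance to the nearest cluster centroid exceeds k standard
# deviations of that cluster (e.g., k = 2 or 3).
import math

def is_anomalous(point, clusters, k=2.0):
    """point: feature vector for the access request.
    clusters: list of (centroid, std) pairs learned from historical data."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    centroid, std = min(clusters, key=lambda c: dist(point, c[0]))
    return dist(point, centroid) > k * std

# Hypothetical single cluster: (hour-of-access, session duration in minutes).
clusters = [((9.0, 30.0), 1.5)]
print(is_anomalous((9.5, 31.0), clusters))   # near the centroid → False
print(is_anomalous((3.0, 300.0), clusters))  # far outside the cluster → True
```

Updating `clusters` as each new request arrives (rather than in weekly batches) corresponds to the real-time model update the reference emphasizes.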
Kuperman et al. (US PGPUB. # US 2017/0244737) discloses, the model database 311 may be configured to store models being trained and/or active within the system. In example embodiments, the model database 311 may store models specific to particular web applications. Thus, for example, a proxy runtime 305A protecting multiple web applications and/or multiple proxy runtimes 305 protecting different web applications may rely on a consolidated model database 311. (¶81).
However, none of the references teaches the limitations, “……dedicate a corresponding machine learning model to said each one combination group such that multiple machine learning models are dedicated, wherein each respective multiple machine learning model is tasked with performing anomaly detection for its respective combination grouping; cause the multiple machine learning models to perform anomaly detection on their respective combination groupings…”.
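For illustration only, the claimed arrangement of one dedicated model per combination group could be sketched as below. Every name here (the `GroupModel` class, its placeholder detection rule, and the example group keys) is hypothetical and not drawn from the claims; the sketch only shows the structural idea of multiple models, each dedicated to and run against its own combination grouping.

```python
# Hypothetical sketch: dedicate one model to each combination group, then have
# each model perform anomaly detection only on its own group's events.
class GroupModel:
    def __init__(self):
        self.events = []

    def fit(self, events):
        self.events = list(events)

    def detect(self, event):
        # Placeholder rule: any event not seen for this group is anomalous.
        return event not in self.events

def build_dedicated_models(grouped_events):
    """grouped_events: {combination_group: [events]} → one model per group."""
    models = {}
    for group, events in grouped_events.items():
        m = GroupModel()
        m.fit(events)
        models[group] = m   # this model is dedicated to this combination group
    return models

models = build_dedicated_models({
    ("attorney", "intranet"): ["open_brief", "read_case"],
    ("dba", "database_JKL"): ["read_TABLE_01"],
})
print(models[("dba", "database_JKL")].detect("write_TABLE_99"))  # → True
```

The point of contrast with the cited art is structural: rather than one behavior model (or one model selected per user/application), a separate model is dedicated to each combination grouping and tasked with detection for that grouping alone.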
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
Kang et al. (US PGPUB. # US 2022/0172037) discloses, a method in which, in response to receiving a request, trace data and specifications for a sequence of requests for normal behavior of a microservice application are collected. Embodiments of the present invention can then generate request contextual features from the collected trace data and specifications. Embodiments of the present invention can then train a neural network model based on the generated contextual features and predict anomalous behavior of the microservice application using the trained neural network model.
Prabhu et al. (US PGPUB. # US 2021/0234877) discloses, proactively protecting service endpoints based on deep learning of user location and access patterns. A machine-learning model is trained to recognize anomalies in access patterns relating to endpoints of a cloud-based service by capturing metadata associated with user accesses. The metadata for a given access includes information regarding a particular user that initiated the given access, a particular device utilized, a particular location associated with the given access and specific workloads associated with the given access. An anomaly relating to an access by a user to a service endpoint is identified by monitoring the access patterns and applying the machine-learning model to metadata associated with the access. Based on a degree of risk to the cloud-based service associated with the identified anomaly, a mitigation action is determined. The cloud-based service is proactively protected by programmatically applying the determined mitigation action.
Salunke et al. (US PGPUB. # US 2020/0351283) discloses, summarizing, diagnosing, and correcting the cause of anomalous behavior in computing systems. In some embodiments, a system identifies a plurality of time series that track different metrics over time for a set of one or more computing resources. The system detects a first set of anomalies in a first time series that tracks a first metric and assigns a different respective range of time to each anomaly. The system determines whether the respective range of time assigned to an anomaly overlaps with timestamps or ranges of time associated with anomalies from one or more other time series. The system generates at least one cluster that groups metrics based on how many anomalies have respective ranges of time and/or timestamps that overlap. The system may perform, based on the cluster, one or more automated actions for diagnosing or correcting a cause of anomalous behavior.
Salama et al. (US PGPUB. # US 2018/0247220) discloses, detecting data anomalies by a processor. A machine learning model may be trained according to collected scores and anomaly labels of a plurality of anomaly detection operations applied to one or more data sets such that the collected scores and labels identify a degree of accuracy of estimating anomalies for each of the plurality of anomaly detection operations. An anomaly may be detected in an unstructured data set by applying the trained machine learning model on an unstructured data set.
Mermoud et al. (US PGPUB. # US 2017/0279830) discloses, a device in a network detects an anomaly in the network by analyzing a set of sample data regarding one or more conditions of the network using a behavioral analytics model. The device receives feedback regarding the detected anomaly. The device determines that the anomaly was a true positive based on the received feedback. The device excludes the set of sample data from a training set for the behavioral analytics model, in response to determining that the anomaly was a true positive.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARSHAN I DHRUV whose telephone number is (571)272-4316. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DARSHAN I DHRUV/Primary Examiner, Art Unit 2498