DETAILED ACTION
Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Applicant is advised that should claim 11 be found allowable, claim 1 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m). To the best of the Examiner's determination, claim 1 appears to be claim 11 written in independent form, including the limitations from the corresponding parent claims 2, 7, 9, & 10.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 6-17, & 20 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Beauchesne (U.S. Patent Publication 2020/0329062).
Regarding claims 2 and 17:
Beauchesne discloses a method and non-transitory computer readable medium for detecting unauthorized access to a networked application based on discerning user activity data, comprising:

receiving a combined activity dataset, wherein the combined activity dataset comprises a plurality of activities corresponding to a combined account, and wherein the combined account is associated with a plurality of users (paragraphs 0040-0043, including ¶ 0040: “Generally, the process includes capturing a set of token/authorization requests/responses or logs that provide the equivalent information, scoring the associated accounts, hosts, and services to implement a machine learning process to generate quantitative models that capture the relationships among relative scores of the account, hosts and service” and ¶ 0043: “Generally, this includes a process for determining relative values as represented by privilege scores for hosts, accounts, and services associated with token/authorization requests/responses during a training period corresponding to the training dataset. These privilege scores indicate the operational privilege of an account, host or service. For instance, an everyday email account would likely have a low score indicating it is probably a low privilege account (independent of the actual privileges in a policy profile of the account), whereas one of only a few accounts that are authorized to access a particular file server would have a high score indicating that it is probably a high privilege account (independent of the actual privileges in a policy profile of the account)”; see also paragraphs 0020-0021, including “For instance, for most companies each employee will have one or more email accounts”);

updating a base breach detection model based on the combined activity dataset to generate a combined breach detection model, wherein the combined breach detection model is trained to detect breach activity for the plurality of users (Ibid., including ¶ 0040: “Furthermore, the process includes control for updating the scoring and model(s) when a set of conditions are satisfied.” and ¶ 0041: “Meanwhile, the training phase also captures/analyzes the new network activity and uses the new network activity to update the machine learning as represented by the model(s) when one or more corresponding conditions are met”);

in response to receiving an indication of a first user account created for a first user of the plurality of users, duplicating the combined breach detection model to generate a first breach detection model that is linked to the first user (paragraph 0063, including “In some embodiments, accounts will be used on multiple devices. For instance, while most accounts will typically operate out of a single host (e.g. laptop/desktop computer) administrators might have accounts that they use on multiple devices…”);

subsequent to generating the first breach detection model, receiving a first activity dataset for the first user (paragraphs 0066-0069, including ¶ 0066: “The process starts at 302b, where new token/authorization request/response data is captured as discussed in regard to 202, 302a, and 305a above” and ¶ 0069: “At 312b, processing is initiated for the newly captured data using the previously generated model(s) to identify potentially malicious activity”);

training a labeling model to associate activities from the first activity dataset with the first user (paragraph 0067: “Subsequently, each newly captured token/authorization request and/or response is processed to retrieve an account privilege score at 306b, service privilege score at 308b, and host privilege score at 310b—e.g. by identifying the relevant information (host, account, and service) and performing a lookup operation on the captured token/authorization request/response and/or log data from 202 used to execute the machine learning process at 204”; see also paragraph 0043: “Generally, this includes a process for determining relative values as represented by privilege scores for hosts, accounts, and services associated with token/authorization requests/responses during a training period corresponding to the training dataset…Using these privileged scores, the detection engine 112 uses a machine learning process to generate models that can be used to determine if a current access is malicious, where a malicious access corresponds to a lack of congruence in the privilege scores of the entities involved in the access”);

processing the combined activity dataset using the labeling model to associate activities from a first portion of the combined activity dataset with the first user (paragraph 0067, including: “Subsequently, each newly captured token/authorization request and/or response is processed to retrieve an account privilege score at 306b, service privilege score at 308b, and host privilege score at 310b—e.g. by identifying the relevant information (host, account, and service) and performing a lookup operation on the captured token/authorization request/response and/or log data from 202 used to execute the machine learning process at 204. For instance, the account privilege score, at 306b, can be retrieved by matching the account to a previously generated account privilege score computed as discussed above in regard to 306a (and optionally 307a)…”); and

updating the base breach detection model based on the activities from the first activity dataset and the activities from the first portion of the combined activity dataset to generate a second breach detection model, wherein the second breach detection model is trained to detect breach activity for the first user (paragraph 0051, including “At 210, a determination is made as to whether the previously generated models should be updated to account for more recently captured data—e.g. the recent processed token/authorization request/response activity”).
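For illustration only, the privilege-score congruence check described in the passages quoted above (Beauchesne, ¶¶ 0043 & 0067) can be sketched as follows; all identifiers, score values, and the gap threshold are hypothetical and do not appear in the reference:

```python
# Hypothetical sketch: each access names an account, host, and service;
# privilege scores learned during training are looked up, and a lack of
# congruence among them is flagged as potentially malicious.

ACCOUNT_SCORES = {"alice": 0.2, "svc_admin": 0.9}    # learned in training
SERVICE_SCORES = {"email": 0.1, "file_server": 0.85}
HOST_SCORES = {"laptop-01": 0.3, "dc-01": 0.9}

def is_incongruent(account, host, service, gap=0.5):
    """Flag an access whose account score trails the service/host score
    by more than `gap` (threshold is illustrative, not from the reference)."""
    a = ACCOUNT_SCORES.get(account, 0.0)
    s = SERVICE_SCORES.get(service, 0.0)
    h = HOST_SCORES.get(host, 0.0)
    return max(s, h) - a > gap

print(is_incongruent("alice", "laptop-01", "email"))    # low/low: False
print(is_incongruent("alice", "dc-01", "file_server"))  # low account, high service: True
```

As the quoted ¶ 0043 explains, a low-privilege everyday account requesting a high-privilege file server is the kind of incongruence the scoring is intended to surface.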
Regarding claim 6: Beauchesne further discloses wherein updating the base breach detection model based on the combined activity dataset to generate the combined breach detection model comprises: determining, for the plurality of activities, a plurality of breach indicators, wherein each breach indicator of the plurality of breach indicators indicates whether a corresponding activity of the plurality of activities is associated with a breach (paragraphs 0069-0074 and Figures 4A-4B); and updating the base breach detection model based on the plurality of activities and the plurality of breach indicators to generate the combined breach detection model (Ibid; see also paragraph 0051).
Regarding claims 7 and 20: Beauchesne further discloses delinking the combined breach detection model from one or more remaining users of the plurality of users, wherein the one or more remaining users comprise the plurality of users associated with the combined account subsequent to creating the first user account (paragraph 0058, including “Thus, only account, service/server, and host combinations that were actually requested or provided during a training period are relevant in determining scores. This avoids counting stale and unused accounts when trying to determine operational privileges of a resource. For instance, this avoids counting the accounts for employees that are not currently active (e.g. employees on leave, vacation, or no longer with the company), and thus avoids diluting the scores of those that are actually active working for, or on behalf of, an organization”); receiving a second activity dataset for the one or more remaining users (paragraphs 0066-0069); and training the labeling model to associate activities from the second activity dataset with the one or more remaining users (Ibid; see also paragraph 0043).
Regarding claim 8: Beauchesne further discloses wherein training the labeling model comprises: based on an account database, determining a first account identifier for the first user account corresponding to the first user and a second account identifier for the combined account corresponding to the one or more remaining users (e.g. paragraph 0059); generating first training data, wherein the first training data comprises the first account identifier and the first activity dataset (paragraphs 0040-0043); generating second training data, wherein the second training data comprises the second account identifier and the second activity dataset (Ibid); and based on the first training data and the second training data, training the labeling model to associate activities of the plurality of activities with the combined account or with the first user account (paragraph 0043).
Regarding claim 9: Beauchesne further discloses processing the combined activity dataset using the labeling model to associate activities from a second portion of the combined activity dataset with the one or more remaining users (paragraphs 0066-0069); and updating the base breach detection model based on the activities from the second activity dataset and the second portion of the combined activity dataset to generate a third breach detection model, wherein the third breach detection model is trained to detect breach activity for the one or more remaining users (Ibid.; see also paragraphs 0043 & 0051).
Regarding claim 10: Beauchesne further discloses subsequent to generating the third breach detection model, receiving a fourth activity dataset corresponding to the one or more remaining users and a plurality of breach indicators, wherein each breach indicator of the plurality of breach indicators comprises an indication of whether a corresponding activity of the fourth activity dataset is associated with breach activity (paragraphs 0069-0074 and Figures 4A-4B); and updating the third breach detection model by training the third breach detection model using the fourth activity dataset and the plurality of breach indicators (Ibid.; see also paragraphs 0043 & 0051).
Regarding claim 11: Beauchesne further discloses in response to receiving an indication of a second user account created for a second user of the one or more remaining users, duplicating the third breach detection model to generate a fourth breach detection model that is trained to detect breach activity for the second user (paragraph 0063); based on receiving a third activity dataset corresponding to the second user account, training the labeling model to associate activities from the third activity dataset with the second user (paragraphs 0066-0069); and based on processing the combined activity dataset and the second activity dataset using the labeling model, updating the base breach detection model to generate a sixth breach detection model, wherein the sixth breach detection model is trained to detect breach activity for the second user (Ibid.; see also paragraphs 0043 & 0051).
Regarding claim 12: Beauchesne further discloses wherein processing the combined activity dataset using the labeling model comprises processing the plurality of activities using the labeling model to generate a plurality of account identifiers, wherein each account identifier of the plurality of account identifiers associates a corresponding activity of the plurality of activities with a first account identifier corresponding to the first user account or a second account identifier corresponding to the combined account (paragraph 0059).
Regarding claim 13: Beauchesne further discloses wherein updating the base breach detection model to generate the second breach detection model comprises: based on matching each account identifier of the plurality of account identifiers with the corresponding activity of the plurality of activities, determining a subset of activities, wherein the subset of activities comprises labeled activities of the plurality of activities that correspond to the first account identifier (paragraphs 0040-0043 & 0051); and updating the base breach detection model based on the activities from the first activity dataset and the subset of activities to generate the second breach detection model (Ibid).
Regarding claim 14: Beauchesne further discloses subsequent to generating the second breach detection model, receiving a third activity dataset corresponding to the first user and a plurality of breach indicators, wherein each breach indicator of the plurality of breach indicators comprises an indication of whether a corresponding activity of the third activity dataset includes a breach activity (paragraphs 0069-0074 and Figures 4A-4B); and updating the second breach detection model by training the second breach detection model using the third activity dataset and the plurality of breach indicators (Ibid; see also paragraph 0051).
Regarding claim 15: Beauchesne further discloses receiving updated model parameters for the base breach detection model (paragraphs 0040-0043); updating the base breach detection model based on the updated model parameters (Ibid. and paragraph 0051); and updating the second breach detection model based on training the base breach detection model using the activities from the first activity dataset and the first portion of the combined activity dataset (Ibid.).
Regarding claim 16: Beauchesne further discloses updating the first breach detection model based on the activities from the first activity dataset and the first portion of the combined activity dataset to generate the second breach detection model, wherein the second breach detection model is trained to detect breach activity for the first user (paragraphs 0040-0043).
Regarding claim 1:
The rejections of claims 2, 7, 9, & 10 apply mutatis mutandis to claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-5 & 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Beauchesne as applied to claim 2 above, and further in view of Lal (U.S. Patent Publication 2024/0403420).
Regarding claims 3 and 18: Beauchesne further discloses receiving a first activity associated with the first user account, wherein the first activity comprises an indication of an account-related event (paragraph 0067, including: “Subsequently, each newly captured token/authorization request and/or response is processed to retrieve an account privilege score at 306b, service privilege score at 308b, and host privilege score at 310b—e.g. by identifying the relevant information (host, account, and service) and performing a lookup operation on the captured token/authorization request/response and/or log data from 202 used to execute the machine learning process at 204. For instance, the account privilege score, at 306b, can be retrieved by matching the account to a previously generated account privilege score computed as discussed above in regard to 306a (and optionally 307a)…”); and based on processing the first activity using the second breach detection model, generating a privilege score to determine if the activity is malicious (paragraphs 0066-0069), although it is unclear whether this privilege score can be construed as a breach probability, wherein the breach probability indicates a likelihood that the first activity is not associated with the first user. However, Lal discloses a related invention for breach detection using machine learning (e.g. Abstract and paragraph 0028) wherein this limitation is taught (Lal, paragraphs 0208-0209, including “The assessment module 325 with the AI classifiers output can be a score (ranked number system, probability, etc.) that a given identified process is likely a malicious process” and “The assessment module 325 with the AI classifiers can be configured to assign a numerical assessment, such as a probability, of a given cyber threat hypothesis that is supported…”; see also paragraphs 0175-0180).
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify Beauchesne to calculate a breach probability to detect malicious activity, as the use of probabilistic mathematics allows the AI model to always be up to date on what current normal behavior is without being reliant on human input, while also being able to see hitherto undiscovered cyber events that would otherwise have gone unnoticed (Lal, paragraph 0170).
Regarding claims 4 and 19: The combination further discloses comparing the breach probability with a threshold probability, wherein the threshold probability indicates a probability value where a potential breach has occurred (Beauchesne, paragraphs 0064 & 0069; Lal, paragraph 0176); and based on determining that the breach probability is greater than the threshold probability, generating a first message for display on a user interface, wherein the user interface is associated with a first user device for the first user, and wherein the first message comprises an indication of a potential breach (Beauchesne, paragraph 0050; Lal, paragraph 0219).
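For illustration only, the threshold comparison mapped to claims 4 and 19 can be sketched as follows; the function name, message text, and the 0.8 threshold are hypothetical and appear in neither reference:

```python
# Hypothetical sketch: a breach probability produced by the detection
# model is compared against a threshold probability, and an alert for
# display on the user's interface is generated when it is exceeded.

def check_breach(breach_probability, threshold=0.8):
    """Return an alert record when the probability exceeds the threshold."""
    if breach_probability > threshold:
        return {"alert": True,
                "message": "Potential breach detected on your account."}
    return {"alert": False, "message": None}

print(check_breach(0.95)["alert"])  # True: message generated for the user
print(check_breach(0.30)["alert"])  # False: no alert
```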
Regarding claim 5: The combination further discloses based on generating the first message, receiving a user response from the first user device, wherein the response indicates whether the first activity is associated with the first user; and based on determining that the response indicates that the first activity is not associated with the first user, determining to deactivate the first user account (Beauchesne, paragraph 0050; Lal, paragraph 0219).
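For illustration only, the claim 5 flow (user response followed by conditional deactivation) can be sketched as follows; all names are hypothetical and appear in neither reference:

```python
# Hypothetical sketch: if the user's response to the breach alert
# indicates the flagged activity was not theirs, the first user account
# is deactivated; otherwise the account is left unchanged.

def handle_user_response(activity_is_mine: bool, account: dict) -> dict:
    """Deactivate the account when the user disclaims the activity."""
    if not activity_is_mine:
        return {**account, "active": False}  # deactivate the account
    return account

acct = {"id": "user-1", "active": True}
print(handle_user_response(False, acct)["active"])  # False: deactivated
print(handle_user_response(True, acct)["active"])   # True: unchanged
```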
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
U.S. Patent Publication 2025/0045437 (Koshti)
U.S. Patent Publication 2023/0370476 (Bakshi)
U.S. Patent Publication 2022/0131878 (Tokosch)
U.S. Patent Publication 2019/0098036 (Kyle)
U.S. Patent Publication 2016/0330217 (Gates)
“Deep Learning Approach for Intelligent Intrusion Detection System” (Vinayakumar)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS A GYORFI whose telephone number is (571)272-3849. The examiner can normally be reached 10:00am - 6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Hirl can be reached at 571-272-3685. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
THOMAS A. GYORFI
Examiner
Art Unit 2435
/THOMAS A GYORFI/Examiner, Art Unit 2435 2/6/2026