Prosecution Insights
Last updated: April 19, 2026
Application No. 18/432,659

SYSTEMS AND METHODS FOR SECURITY EVENT ASSOCIATION RULE REFRESH

Final Rejection §103

Filed: Feb 05, 2024
Examiner: WYSZYNSKI, AUBREY H
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Knowbe4 Inc.
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average) — 635 granted / 710 resolved, +31.4% vs TC avg
Interview Lift: +12.6% (moderate) among resolved cases with interview
Avg Prosecution: 2y 10m typical timeline; 26 applications currently pending
Career History: 736 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 24.9% (-15.1% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
TC averages are estimates. Based on career data from 710 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Response to Arguments

Applicant's arguments filed 12/18/25 have been fully considered but they are not persuasive. Applicant argues Sullivan does not identify the number of times a rule identifies different users. The examiner respectfully disagrees. Sullivan, col. 6, line 61, teaches "the conditioning component 185 can generate a query, such as by using a database query language that is compatible with the data store 135 (e.g., a database server associated with the data store), to retrieve event data that is associated with a flagged user or one or more other users. The query can include one or more search conditions for matching tables or records in the data store 135." In the same paragraph, Sullivan continues at col. 7, line 14: "conditioning component 185 is configured to condition the retrieved event data by associating numeric values elements of the event data according to one or more metrics." Sullivan teaches the event data is configured according to a security profile. Col. 7, line 53 teaches that the categorical classification criteria can be learned from the aggregated historical event data of one or more users.

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

As to the dependent claims, Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.

To further expedite prosecution, the examiner recommends further amending the claims to include specific technical improvements to the inventive concept. As currently drafted, the claims are directed to an administrative process of evaluating and flagging rules for a human to fix.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan, US 11,671,435, and further in view of Chono, US 2024/0154921.

Regarding claim 1, Sullivan discloses a method comprising: executing, by one or more processors (computer system 600 includes at least one processor 602), one or more rules against one or more user records in a user metadata store, the one or more rules configured to match a security event of one or more security events with a user of one or more users using user metadata (col. 6, line 51: the conditioning component 185 can generate a query, such as by using a database query language that is compatible with the data store 135 (e.g., a database server associated with the data store), to retrieve event data that is associated with a flagged user or one or more other users. The query can include one or more search conditions for matching tables or records in the data store 135.); identifying, by the one or more processors, a count of a number of times a rule of the one or more rules identifies a plurality of different users (Fig. 2: classification component 200 for classifying event or data access events in a system for automated investigation of flagged users of a computing resource. Col. 6, line 61: the conditioning component 185 can generate a query, such as by using a database query language that is compatible with the data store 135 (e.g., a database server associated with the data store), to retrieve event data that is associated with a flagged user or one or more other users… according to a security profile. Col. 7, line 14: the conditioning component 185 is configured to condition the retrieved event data by associating numeric values elements of the event data according to one or more metrics.); determining, by the one or more processors, that one of the count exceeds a first threshold or a number of the plurality of different users exceeds a second threshold (col. 4, lines 48-55: A data access event can therefore be classified based on the SNR of the event and one or more provided or learned thresholds, conditions, or criteria. In an example, a data access event is classified into a set of small SNR events or a set of large SNR events based on one or more SNR thresholds, conditions or criteria. Col. 9, line 10: the analytical model is generated based on the history of the user or the history (e.g., historic event data) of one or more other users. Evaluating the model can include providing new event data to the model to generate a risk score, similarity or dissimilarity value, or any other metric that is indicative of changes in the behavior of the user. The identified changes can be evaluated against a provided threshold or criteria to determine whether to generate an alert.).

Sullivan lacks or fails to expressly disclose displaying the rule for review. However, Chono discloses displaying, by the one or more processors responsive to the determination, the rule via a user interface to prompt an action to one or more of review, remove or modify the rule by a system administrator (para. 0086: since the QA data managing unit 154 displays the content of the rule 1450 used for creation of the learning data 1460 in the rule display region 174, the administrator can modify, delete, and supplement the QA data while checking what kind of rule 1450 the learning data 1460 is generated from.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Sullivan with Chono to include displaying the rule, in order for an administrator to modify the rule, as taught by Chono, paragraph 0086.

Regarding claim 2, Sullivan, as modified above, further discloses the method of claim 1, wherein the rule has a left-hand-side of the rule that comprises a security event identifier of one of the one or more security events and a right-hand-side of the rule that comprises the user metadata (col. 6, line 32: The data structure 140 illustrates a sample of the information that can be included in event data 130. Such information can include an identifier of a data object that is associated with a data access event and metadata associated with the operation.).

Regarding claim 3, Sullivan, as modified above, further discloses the method of claim 1, further comprising determining, by the one or more processors, an ambiguity score for the rule of the one or more rules (col. 9, lines 13-16: Evaluating the model can include providing new event data to the model to generate a risk score, similarity or dissimilarity value, or any other metric that is indicative of changes in the behavior of the user.).

Regarding claim 4, Sullivan, as modified above, further discloses the method of claim 3, wherein the ambiguity score is based at least on the count (col. 9, lines 13-16: Evaluating the model can include providing new event data to the model to generate a risk score, similarity or dissimilarity value, or any other metric that is indicative of changes in the behavior of the user.).

Regarding claim 5, Sullivan, as modified above, further discloses the method of claim 3, further comprising displaying, by the one or more processors, the ambiguity score with the rule (col. 5, line 29: generated alerts, classification, risk scores, and other security risk information and providing such information to an organization for analysis).

Regarding claim 6, Sullivan, as modified above, further discloses the method of claim 1, further comprising determining, by the one or more processors, that a plurality of rules results in an ambiguity of matching a user to the security event (col. 6, line 66: The query can include one or more search conditions for matching tables or records in the data store 135. The search conditions can be selected to retrieve tables or records that store information that is useful for analyzing the behavior of the flagged user.).

Regarding claim 7, Sullivan, as modified above, further discloses the method of claim 1, further comprising executing, by the one or more processors, one or more rules of a same type from a combined rule list (col. 6, line 66: The query can include one or more search conditions for matching tables or records in the data store 135. The search conditions can be selected to retrieve tables or records that store information that is useful for analyzing the behavior of the flagged user.).

Regarding claim 8, Sullivan, as modified above, further discloses the method of claim 1, further comprising determining, by the one or more processors, a ranked list of the one or more rules to one or more of review, remove or modify and displaying, by the one or more processors, the ranked list of the one or more rules to prompt the action to review, remove or modify by the system administrator (col. 7, line 50: categorical classification criteria are provided by an organization as a list of data access events that belong to a specified risk category. In another example, the categorical classification criteria can be learned from the aggregated historical event data of one or more users.).

Regarding claim 9, Sullivan, as modified above, further discloses the method of claim 1, further comprising executing, by the one or more processors, a combined rule list against security event identifiers of security events in an unmapped security event store, the combined rule list updated to exclude the rule identified by the one or more processors (col. 5, lines 25-27: The automated investigation can provide updated information regarding usage patterns and security risks in the form of alerts or reports.).

Regarding claim 10, Sullivan, as modified above, further discloses the method of claim 1, further comprising triggering, by the one or more processors, execution of the combined rule list against security event identifiers of security events in an unmapped security event store responsive to one of new user metadata or new user records in the user metadata store (col. 7, line 21: the automated investigation includes analyzing data access events that are generated after a triggering event to identify new anomalous or high-risk events and any changes in usage patterns that may be indicative of a security breach or a security risk. The automated investigation can provide updated information regarding usage patterns and security risks in the form of alerts or reports.).

As per claims 11-20, these recite the system version of the method discussed above in claims 1-10, wherein all claimed limitations have also been addressed and/or cited as set forth above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AUBREY H WYSZYNSKI, whose telephone number is (571) 272-8155. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ALI SHAYANFAR, can be reached at 571-270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUBREY H WYSZYNSKI/
Examiner, Art Unit 2434
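The claim-1 method the rejection maps against Sullivan is, at bottom, a counting-and-thresholding loop: run each rule over the user metadata store, count total matches and distinct matched users, and flag any rule that exceeds either threshold for administrator review. A minimal Python sketch of that loop follows; the function and field names are illustrative, not drawn from the application.

```python
def flag_ambiguous_rules(rules, user_records, count_threshold, user_threshold):
    """Flag rules whose total match count or distinct-user count exceeds a
    threshold, per the counting-and-thresholding steps recited in claim 1.

    Each rule is modeled as a callable returning True when it would
    associate a security event with the given user record.
    """
    flagged = []
    for rule in rules:
        match_count = 0
        matched_users = set()
        for user in user_records:
            if rule(user):
                match_count += 1
                matched_users.add(user["id"])
        # Either condition marks the rule as ambiguous; the claim then
        # displays it in a UI so an administrator can review, remove, or
        # modify it.
        if match_count > count_threshold or len(matched_users) > user_threshold:
            flagged.append(rule)
    return flagged
```

For example, a rule matching all five users in a five-record store would be flagged at thresholds of three, while a rule matching a single user would not.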

Prosecution Timeline

Feb 05, 2024: Application Filed
Sep 20, 2025: Non-Final Rejection — §103
Dec 11, 2025: Response Filed
Mar 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this examiner in similar technology

Patent 12598211: CYBERATTACK SCORING METHOD, CYBERATTACK SCORING APPARATUS, AND COMPUTER READABLE STORAGE MEDIUM STORING INSTRUCTIONS TO PERFORM CYBERATTACK SCORING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592932: METHOD AND SYSTEM FOR AN INTEGRATED PROCESS TO STREAMLINE PRIVILEGED ACCESS MANAGEMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580964: OPTIMIZATION FOR ACCESS POLICIES IN COMPUTER SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580887: SCALABLE FLOW DIFFERENTIATION FOR NETWORKS WITH OVERLAPPING IP ADDRESSES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580967: CONTEXTUAL SECURITY POLICY ENGINE FOR COMPUTE NODE CLUSTERS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 99% (+12.6%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 710 resolved cases by this examiner. Grant probability derived from career allow rate.
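The headline probability follows from simple arithmetic on the examiner's career record (635 granted of 710 resolved, per the examiner card above); the rounding to a whole percentage is our assumption about how the page displays it.

```python
# Career figures from the examiner card: 635 granted out of 710 resolved.
granted, resolved = 635, 710

allow_rate = granted / resolved   # career allow rate as a fraction
print(f"{allow_rate:.1%}")        # 89.4%, shown on the page as 89%
```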
