Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application, Application No. 18/630,106, is presented for examination by the examiner. Claims 1-20 are amended. Claims 1-20 have been examined.
Response to Arguments
Applicant’s arguments with respect to claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Brown (US 2017/0295190 A1), in view of Herwadkar (US 2019/0102553 A1).
Regarding Claim 1
Brown teaches:
A method executed by a computer system that assesses a database query, comprising:
determining, by the computer system, a detection query timeframe between an endpoint cybersecurity detection associated with a cybersecurity agent and the database query associated with the cybersecurity agent (Brown ¶0008, ¶0022-0030, ¶0054–0057: teaches determining, by a computer system, a detection-query timeframe by correlating an endpoint cybersecurity detection generated by a security agent with a subsequent event using timestamps and a time-lapse requirement, thereby establishing a temporal window between the security detection and a later monitored action (e.g., a subsequent query or activity).);
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a timeframe between that detection and a subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. Brown is silent as to predicting maliciousness of a database query and blocking the database query. On the other hand, Herwadkar teaches predicting, by a computer system, a malicious operation associated with a database query by extracting attributes of the query, comparing the query against machine-learned distribution models of normal behavior, and assigning a confidence or risk level indicative of whether the query is anomalous or malicious (¶0033–0037, ¶0158–0159). Herwadkar further teaches that, in response to determining that the query is anomalous or exceeds a risk threshold, the system performs an enforcement action including suspending or otherwise blocking execution of the database query, thereby preventing the query from executing (¶0158–0159). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s risk-based malicious-operation determination and enforcement mechanisms into the teachings of Brown in order to determine that a database query occurring within a detection-query timeframe represents a malicious operation and, in response, block the database query, yielding predictable results of known cybersecurity monitoring and anomaly-detection techniques.
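For illustration only, the timestamp-correlation mechanism described above can be sketched as follows. This is an examiner-side sketch, not an implementation drawn from Brown; the five-minute window length and the function name are hypothetical choices made solely to show how a detection-query timeframe check could operate.

```python
from datetime import datetime, timedelta

# Hypothetical window length; Brown describes only a generic time-lapse
# requirement, not a specific duration.
DETECTION_QUERY_TIMEFRAME = timedelta(minutes=5)

def within_detection_query_timeframe(detection_ts: datetime,
                                     query_ts: datetime,
                                     window: timedelta = DETECTION_QUERY_TIMEFRAME) -> bool:
    """Return True when the query timestamp falls inside the temporal
    window opened by the endpoint cybersecurity detection."""
    elapsed = query_ts - detection_ts
    # The query must occur after the detection and within the window.
    return timedelta(0) <= elapsed <= window

detection = datetime(2024, 1, 1, 12, 0, 0)
query = datetime(2024, 1, 1, 12, 3, 0)
print(within_detection_query_timeframe(detection, query))  # True: 3 min <= 5 min
```

A query arriving ten minutes after the detection, or before it, would fall outside the window and would not be correlated with the detection.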
Regarding Claim 2
Brown discloses:
The method of claim 1, further comprising determining that the database query occurs within the detection-query timeframe of the endpoint cybersecurity detection (Brown ¶[0008], ¶[0022], ¶[0025], ¶[0057]: teaches determining, by a computer system, a detection-query timeframe by correlating an endpoint cybersecurity detection generated by a security agent with a subsequent event using timestamps and a minimum time-lapse requirement (i.e., comparing the difference between timestamps to a “minimum time lapse”), thereby establishing a temporal window between the security detection and a later monitored action (e.g., a subsequent query or activity)).
Regarding Claim 3
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a timeframe between that detection and a subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. Brown, however, does not explicitly teach determining that a database query conforms to a cybersecurity assessment profile. Herwadkar teaches determining whether a database query conforms to a cybersecurity assessment profile by comparing attributes of the query to learned distribution models representing non-anomalous query behavior and determining that the query satisfies similarity thresholds or probability cutoffs such that the query is classified as normal (¶0113–¶0115, ¶0126–¶0156). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s determination of query conformance to a cybersecurity assessment profile into Brown’s security event correlation framework in order to improve the accuracy of determining whether monitored database queries are benign or malicious, yielding predictable results consistent with known cybersecurity monitoring techniques.
Regarding Claim 4
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. Brown, however, does not explicitly teach determining that a database query fails to conform to a cybersecurity assessment profile. Herwadkar teaches determining that a database query fails to conform to a cybersecurity assessment profile by comparing attributes of the query to learned distribution models representing non-anomalous query behavior and determining that the query does not satisfy similarity thresholds, frequency thresholds, or probability cutoffs, such that the query is classified as anomalous (¶0119, ¶0125, ¶0155–¶0156). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s determination of non-conformance to a cybersecurity assessment profile into Brown’s security event correlation framework in order to identify malicious or abnormal database queries following an endpoint detection, yielding predictable results consistent with known cybersecurity monitoring and anomaly-detection techniques.
Regarding Claim 5
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a timeframe between that detection and a subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. Brown, however, does not explicitly teach determining that a database query is a true positive report. Herwadkar teaches determining that an anomalous query is a true positive by calculating a confidence score and assigning a risk level based on similarity thresholds, probability cutoffs, and joint probability evaluations, thereby distinguishing true malicious queries from false positives (¶0113–¶0115, ¶0155–¶0158). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s confidence-based true-positive determination into Brown’s security event correlation framework in order to improve the reliability of exploit reporting and reduce false positives, yielding predictable results consistent with established cybersecurity detection practices.
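For illustration only, the confidence-based true-positive determination attributed to Herwadkar above can be sketched as follows. The threshold values and function name are hypothetical; Herwadkar's actual similarity thresholds, probability cutoffs, and joint-probability evaluations are not reproduced here.

```python
def classify_alert(similarity: float, probability: float,
                   sim_threshold: float = 0.4, prob_cutoff: float = 0.05) -> str:
    """Treat an anomalous query as a true positive only when it is both
    dissimilar to learned normal behavior (low similarity score) and
    improbable under the distribution model (low joint probability);
    otherwise discount the alert as a likely false positive."""
    if similarity < sim_threshold and probability < prob_cutoff:
        return "true positive"
    return "false positive"

# A query that closely resembles normal behavior is not reported,
# even if one signal alone looks suspicious.
print(classify_alert(0.1, 0.01))  # true positive
print(classify_alert(0.1, 0.50))  # false positive
```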
Regarding Claim 6
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a timeframe between that detection and a subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. Brown is silent as to predicting maliciousness of a database query. On the other hand, Herwadkar teaches predicting, by a computer system, a malicious operation associated with a database query by extracting attributes of the query, comparing the query against machine-learned distribution models of normal behavior, and assigning a confidence or risk level indicative of whether the query is anomalous or malicious (e.g., ¶¶0033–0037, 0158–0159). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s risk-based malicious-operation determination and enforcement mechanisms into the teachings of Brown in order to determine that a database query occurring within a detection-query timeframe represents a malicious operation and, in response, block the database query, yielding predictable results of known cybersecurity monitoring and anomaly-detection techniques.
Regarding Claim 7
Brown teaches correlating endpoint security events and activities using event notifications, object types, activity types, timestamps, and correlation indices to detect exploit activity and trigger preventative actions. Brown, however, does not teach determining a unique cybersecurity event query signature associated with a database query. Herwadkar teaches determining a unique cybersecurity event query signature associated with a database query. Herwadkar discloses converting attributes of an individual database query into a vector representation composed of query-specific features, including query type, tables, columns, privileges, and token counts, thereby generating a representation unique to that query (¶0094–¶0095, ¶0102). Herwadkar further teaches transforming the vector representation into a constant-dimensional signature using hashing or random linear transformations and mapping the signature to hash buckets for anomaly detection and security assessment (¶0108–¶0110). Accordingly, Herwadkar teaches determining a unique cybersecurity event query signature associated with the database query under a broadest reasonable interpretation. It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s vector representation composed of query-specific features into Brown’s security event correlation framework in order to improve identifying suspicious queries in order to detect exploits and trigger preventative actions.
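For illustration only, the signature-generation technique attributed to Herwadkar above (query attributes folded into a constant-dimensional representation via hashing) can be sketched as follows. The signature length, attribute names, and use of SHA-256 are hypothetical choices; Herwadkar's actual hashing or random-linear-transformation scheme is not reproduced here.

```python
import hashlib

SIGNATURE_DIM = 16  # fixed signature length; illustrative choice

def query_signature(attributes: dict) -> list:
    """Fold arbitrary query attributes (query type, tables, columns,
    privileges, token counts) into a constant-dimensional count vector
    by hashing each attribute into one of SIGNATURE_DIM buckets."""
    signature = [0] * SIGNATURE_DIM
    for name, value in attributes.items():
        digest = hashlib.sha256(f"{name}={value}".encode("utf-8")).digest()
        # First digest byte selects the bucket for this attribute.
        signature[digest[0] % SIGNATURE_DIM] += 1
    return signature

sig = query_signature({"query_type": "SELECT", "tables": "users",
                       "columns": "email", "privilege": "read",
                       "token_count": 12})
```

Because the mapping is deterministic, identical queries yield identical signatures, while the output dimension stays constant regardless of how many attributes a query carries.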
Claims 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Brown (US 2017/0295190 A1), in view of Klein (US 2020/0097587 A1), and further in view of Herwadkar (US 2019/0102553 A1).
Regarding Claim 8
Brown discloses:
At least one computer system that assesses a database query, comprising:
at least one central processing unit; and
at least one memory device storing instructions that, when executed by the at least one central processing unit, perform operations, the operations comprising:
determining a detection-query timeframe between an endpoint cybersecurity detection detected by a cybersecurity agent and the database query detected by the cybersecurity agent (Brown ¶0008, ¶0022–0030, ¶0054–¶0057: teaches determining, by a computer system, a detection-query timeframe by correlating an endpoint cybersecurity detection generated by a security agent with a subsequent event using timestamps and a time-lapse requirement, thereby establishing a temporal window between the security detection and a later monitored action (e.g., a subsequent query or activity).);
Brown teaches determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse to identify subsequent activity temporally related to the detection. Brown, however, does not explicitly teach pre-screening a database query using a machine-learning-generated profile. Klein teaches pre-screening database queries before execution using a cybersecurity service that applies a trained machine-learning classifier to analyze queries and identify malicious or anomalous behavior, where the classifier is trained to recognize normal database behavior and deviations therefrom (Klein ¶¶0005, 0036). Klein further teaches that, in response to identifying a query as potentially malicious, the system may not execute the query or block execution (Klein ¶0039). It would have been obvious to one of ordinary skill in the art to incorporate Klein’s machine-learning-based pre-execution query screening into Brown’s detection-query timeframe correlation framework in order to evaluate database queries occurring within the detection-query timeframe against a learned behavioral profile and prevent execution of malicious queries, yielding predictable results consistent with known cybersecurity monitoring and anomaly-detection techniques.
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach determining that the detection-query timeframe itself represents a malicious operation based on an output generated by the cybersecurity service, nor do they explicitly teach blocking the database query in response to such a determination tied to the detection-query timeframe. Herwadkar, on the other hand, teaches predicting, by a computer system, a malicious operation associated with a database query by extracting attributes of the query, comparing the query against machine-learned distribution models of normal behavior, and generating a risk score or confidence indicative of maliciousness (e.g., ¶¶0033–0037, 0158–0159). Herwadkar further teaches that, in response to determining that the query exceeds a risk threshold or is classified as anomalous, the system enforces a security action including suspending or blocking execution of the database query (¶¶0158–0159). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s risk-based malicious-operation determination and enforcement mechanisms into the combined Brown and Klein framework in order to determine that a database query occurring within a detection-query timeframe represents a malicious operation and, in response, block the database query, yielding predictable results consistent with known cybersecurity monitoring and anomaly-detection techniques.
Regarding Claim 9
Claim 9 is directed to a system corresponding to the computer-implemented method in claim 2. Claim 9 is similar in scope to claim 2 and is therefore rejected under similar rationale.
Regarding Claim 10
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach determining that a database query conforms to a cybersecurity assessment profile. Herwadkar teaches determining whether a database query conforms to a cybersecurity assessment profile by comparing attributes of the query to learned distribution models representing non-anomalous query behavior and determining that the query satisfies similarity thresholds or probability cutoffs such that the query is classified as normal (¶0113–¶0115, ¶0126–¶0156). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s determination of query conformance into the systems of Brown and Klein in order to improve the accuracy of determining whether monitored database queries are benign or malicious, yielding predictable results consistent with known cybersecurity monitoring techniques.
Regarding Claim 11
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach determining that a database query fails to conform to a cybersecurity assessment profile. Herwadkar teaches determining that a database query fails to conform to a cybersecurity assessment profile by comparing attributes of the query to learned distribution models representing non-anomalous query behavior and determining that the query does not satisfy similarity thresholds, frequency thresholds, or probability cutoffs, such that the query is classified as anomalous (¶0119, ¶0125, ¶0155–¶0156). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s determination of non-conformance to a cybersecurity assessment profile into the combined Brown and Klein framework in order to identify malicious or abnormal database queries following an endpoint detection, yielding predictable results consistent with known cybersecurity monitoring and anomaly-detection techniques.
Regarding Claim 12
Brown discloses:
The at least one computer system of claim 11, wherein the operations further comprise allowing the database query (Brown ¶0010–¶0013, ¶0037–¶0039: teaches that preventative actions are taken only when exploit activities are detected, and when no exploitive detections are indicated the monitored operations (query) are permitted without restrictions.).
Regarding Claim 13
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach predicting maliciousness of a database query. On the other hand, Herwadkar teaches predicting, by a computer system, a malicious operation associated with a database query by extracting attributes of the query, comparing the query against machine-learned distribution models of normal behavior, and assigning a confidence or risk level indicative of whether the query is anomalous or malicious (e.g., ¶¶0033–0037, 0158–0159). It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s risk-based malicious-operation determination and enforcement mechanisms into the teachings of Brown and Klein in order to determine that a database query occurring within a detection-query timeframe represents a malicious operation and, in response, block the database query, yielding predictable results of known cybersecurity monitoring and anomaly-detection techniques.
Regarding Claim 14
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach determining a unique cybersecurity event query signature associated with a database query. Herwadkar teaches determining a unique cybersecurity event query signature associated with a database query. Herwadkar discloses converting attributes of an individual database query into a vector representation composed of query-specific features, including query type, tables, columns, privileges, and token counts, thereby generating a representation unique to that query (¶0094–¶0095, ¶0102). Herwadkar further teaches transforming the vector representation into a constant-dimensional signature using hashing or random linear transformations and mapping the signature to hash buckets for anomaly detection and security assessment (¶0108–¶0110). Accordingly, Herwadkar teaches determining a unique cybersecurity event query signature associated with the database query under a broadest reasonable interpretation. It would have been obvious to one of ordinary skill in the art to incorporate Herwadkar’s vector representation composed of query-specific features into the systems of Brown and Klein in order to improve identifying suspicious queries in order to detect exploits and trigger preventative actions.
Regarding Claim 15
A memory device storing instructions that, when executed by a central processing unit, perform operations, comprising:
monitoring lightweight directory access protocol (LDAP) queries reported via a cloud computing environment by endpoint cybersecurity detection agents monitoring client devices (Brown ¶0014–0019: teaches monitoring security-relevant activity events on client devices via an endpoint security agent and reporting those events to a remote security service implemented as a cloud computing network of nodes. Because the agent’s event collectors observe “all sorts of events,” including activities, and generate event notifications for remote analysis, this reasonably maps to monitoring LDAP query activity reported via the cloud by endpoint cybersecurity detection agents.);
Brown teaches routing security-relevant, timestamped endpoint agent event notifications to a remote security service and determining a detection-to-event timeframe by evaluating differences between timestamps and/or a minimum time lapse. Brown, however, does not explicitly teach pre-screening a database query using a machine-learning-generated profile. Klein teaches pre-screening query-language statements by routing query/query-derived information to an injection detection component (security service) and using a trained machine-learning classifier that models “normal” behavior and compares query representations (tokens/hash/index values/query statistics/call stack context) to detect anomalous/malicious queries before execution (e.g., ¶0036–0037, 0069–0078, 0079–0084). Thus, it would have been obvious to apply Klein’s ML query pre-screening at Brown’s remote security service for directory queries (LDAP) reported by endpoint agents, using Brown’s detection-query timeframe as an input/correlation feature for the ML-derived assessment profile, thereby pre-screening the LDAP queries by comparison to the ML-generated cybersecurity assessment profile.
Brown and Klein collectively teach determining, by a computer system, an endpoint cybersecurity detection associated with a cybersecurity agent and determining a detection-query timeframe between that detection and a database query or subsequent event by correlating timestamped event notifications and evaluating a difference between timestamps and/or a minimum time lapse. However, Brown and Klein do not explicitly teach generating cybersecurity predictions associated with the pre-screening of the LDAP queries; and blocking the LDAP queries predicted as suspicious operations. Herwadkar teaches generating a cybersecurity prediction for a query by extracting query attributes, comparing them to machine-learned distribution models of normal behavior, and outputting a risk score/confidence indicative of maliciousness (¶0033–0037, 0158–0159). Herwadkar further teaches that in response to determining the query exceeds a risk threshold / is anomalous, the system performs an enforcement action including suspending or blocking execution of the query (¶0158–0159). Thus, in the combined Brown and Klein framework, Herwadkar’s risk score/confidence is the claimed cybersecurity prediction associated with the pre-screening, and Herwadkar’s enforcement action maps to blocking the LDAP queries predicted as suspicious operations. It would have been obvious to incorporate Herwadkar’s risk-based prediction output and blocking enforcement into Brown and Klein’s query pre-screening pipeline to convert the pre-screening determination into a predictive risk score/confidence and to block queries classified as suspicious, yielding predictable improvements in automated prevention of malicious query operations.
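For illustration only, the risk-threshold enforcement attributed to Herwadkar above can be sketched as follows. The threshold value, function name, and LDAP filter strings are hypothetical; only the decision pattern (a model-produced risk score compared against a cutoff, with queries above it blocked) is drawn from the discussion above.

```python
RISK_THRESHOLD = 0.8  # hypothetical cutoff; the actual value is model-dependent

def enforce(ldap_query: str, risk_score: float,
            threshold: float = RISK_THRESHOLD) -> str:
    """Map a model-produced risk score (the cybersecurity prediction) to
    an enforcement decision: scores at or above the threshold block
    (suspend) the query; all others are allowed to execute."""
    return "block" if risk_score >= threshold else "allow"

# A broad enumeration-style filter scored as high risk is blocked,
# while an ordinary lookup scored as low risk is allowed.
print(enforce("(objectClass=*)", 0.95))  # block
print(enforce("(uid=jdoe)", 0.20))       # allow
```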
Regarding Claim 16
Claim 16 is directed to a memory device storing instructions corresponding to the subject matter of claim 9. Claim 16 is similar in scope to claim 9 and is therefore rejected under similar rationale.
Regarding Claim 17
Claim 17 is directed to a memory device storing instructions corresponding to the subject matter of claim 10. Claim 17 is similar in scope to claim 10 and is therefore rejected under similar rationale.
Regarding Claim 18
Claim 18 is directed to a memory device storing instructions corresponding to the subject matter of claim 11. Claim 18 is similar in scope to claim 11 and is therefore rejected under similar rationale.
Regarding Claim 19
Claim 19 is directed to a memory device storing instructions corresponding to the subject matter of claim 12. Claim 19 is similar in scope to claim 12 and is therefore rejected under similar rationale.
Regarding Claim 20
Claim 20 is directed to a memory device storing instructions corresponding to the subject matter of claim 13. Claim 20 is similar in scope to claim 13 and is therefore rejected under similar rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD A ABDULLAH whose telephone number is (571) 272-1531. The examiner can normally be reached on Monday - Friday, 8:30am - 5:00pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached on (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAAD AHMAD ABDULLAH/ Examiner, Art Unit 2431
/MICHAEL R VAUGHAN/Primary Examiner, Art Unit 2431