Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application having Application No. 18/610,354 is presented for examination by the examiner. Claims 1, 8 and 15 are amended. Claims 1-20 have been examined.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 8 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6-9, 13-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Camp (US 2024/0061937 A1), in view of Bazalgette (US 2024/0223592 A1).
Regarding Claim 1
Camp discloses:
A method for performing real-time analytics based on generated telemetry, the method comprising:
identifying an executable file and an action associated with the executable file and performed on a host (Camp ¶25: Teaches identifying a process (executable file) and the action it performs (e.g. opening a file) as part of an observed event on a host.);
generating a behavioral graph having nodes and edges based on the executable file and the action (Camp ¶20, 23, 56-57: teaches generating a behavioral (attack) graph having nodes and edges based on an executable file and an action, by modeling events in which a process (created from an executable file) performs actions, adding nodes to a conceptual graph and creating links between nodes according to causal relationship rules in a directed graph data structure.);
upon execution of subsequent actions associated with the executable file continually adding corresponding additional nodes and edges to the behavioral graph (Camp ¶63, 66, and 84: teaches that upon execution of subsequent actions associated with an executable file, corresponding additional nodes and edges are continually added to a behavior graph, by processing temporally ordered sequences of events initiated by a process, appending newly classified events to an existing attack graph, extending the graph as further events occur, and committing those events to the directed graph when the graph represents a valid attack sentence or fragment.);
identifying that at least one of the plurality of possible subsequent actions is a malicious action (Camp ¶59 and 123: teaches identifying suspicious or malicious attacks based on heuristics, rules, YARA, and knowledge graph analysis.); and
adding a policy to a policy engine to prevent execution of the at least one of the plurality of possible subsequent actions (Camp ¶64: Teaches predictive policy enforcement by blocking or restricting execution based on next predicted attack words in an attack graph.).
Camp teaches generating and continually extending a behavioral (attack) graph having nodes and edges based on actions performed by a process, and predicting subsequent malicious actions using the behavioral graph in order to prevent such actions. However, Camp does not explicitly disclose that the prediction of subsequent actions is based on at least one of the additional edges of the behavioral graph. Bazalgette teaches applying graph neural network (GNN) models to cyber incident graphs comprising nodes and edges representing entities and activities, and training the GNN on a historical corpus of incident graphs to analyze graph structure and behavior over time. Bazalgette further teaches predicting one or more future progressions or actions of an ongoing cyber incident based on similarities between a current graph and historical graphs, including predicting multiple possible future activities with associated probabilities (e.g., lateral movement, encryption events), and using those predictions to initiate mitigation or prevention actions (¶32, 34, 56, 66).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bazalgette's GNN-based graph prediction technique into Camp's behavioral graph framework in order to improve the prediction accuracy of subsequent malicious actions, as both references are directed to graph-based cybersecurity analytics and predictive threat prevention. Such incorporation allows Camp's behavioral graph to be evaluated using learned graph progressions to predict one or more likely future malicious actions, as taught by Bazalgette.
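For illustration only, the mechanism attributed to the Camp/Bazalgette combination above (continually recording process actions, predicting possible subsequent actions from historical behavior, and adding a blocking policy for any predicted malicious action) can be sketched as follows. All sequences, action names, and the frequency-count predictor below are hypothetical stand-ins, not disclosures of either reference:

```python
from collections import defaultdict

# Hypothetical historical action sequences, standing in for the corpus
# of attack graphs the references describe training on.
HISTORY = [
    ["open_file", "spawn_shell", "encrypt_files"],
    ["open_file", "spawn_shell", "lateral_move"],
    ["open_file", "read_config", "exit"],
]
MALICIOUS = {"encrypt_files", "lateral_move"}

# Build action-to-action transition counts from the corpus; each pair
# of consecutive actions corresponds to an edge in a behavioral graph.
transitions = defaultdict(lambda: defaultdict(int))
for seq in HISTORY:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

def predict_next(last_action, top_n=2):
    """Rank possible subsequent actions by historical frequency
    following the most recently observed action."""
    ranked = sorted(transitions[last_action].items(),
                    key=lambda kv: kv[1], reverse=True)
    return [action for action, _ in ranked[:top_n]]

# Live host: a process just performed "spawn_shell". Predict what may
# follow and add a blocking policy for any predicted malicious action.
predicted = predict_next("spawn_shell")
to_block = [a for a in predicted if a in MALICIOUS]
# predicted == ["encrypt_files", "lateral_move"]; both are policy-blocked.
```

A learned model (Camp's NLP-style classifier or Bazalgette's GNN) would replace the frequency counts here; the sketch only shows the shape of the predict-then-block flow.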
Regarding Claim 2
Camp discloses:
The method of claim 1, further comprising:
identifying a subsequent action associated with the executable file and performed on the host (Camp ¶61-62: Teaches identifying subsequent actions performed on the host by monitoring and filtering events in real time);
updating nodes of the behavioral graph based on the subsequent action (Camp ¶62: Teaches updating nodes of a behavioral graph by provisionally appending and committing events to the attack graph.);
predicting a second plurality of possible subsequent actions based on the updated behavioral graph (Camp ¶64: Teaches predicting multiple subsequent actions using classification of extended graph fragments.);
determining that at least one of the second plurality of possible subsequent actions is the malicious action or another malicious action (Camp ¶62: Teaches identifying that predicted actions are malicious if the graph forms a valid attack sentence.); and
updating the policy to prevent execution of the at least one of the second plurality of possible subsequent actions (Camp ¶64: Teaches predictive policy enforcement by blocking or restricting execution based on next predicted attack words in an attack graph.).
Regarding Claim 6
Camp discloses:
The method of claim 1, further comprising: determining, by the policy engine, an attempt associated with the executable file to perform the at least one of the plurality of possible subsequent actions; and determining, based on determining the attempt, that the executable file is a malicious file (Camp ¶84: Teaches predictive detection of a malicious file by matching an attempted action against previously predicted attack words in the attack graph.).
Regarding Claim 7
Camp discloses:
The method of claim 6, further comprising: preventing, by the policy engine, the execution of the at least one of the plurality of possible subsequent actions based on determining the attempt (Camp ¶64: Teaches predictive policy enforcement by blocking execution based on next predicted words in an attack graph. Camp ¶123: further teaches prevention through active termination of the process upon detecting a predicted malicious action.).
Regarding Claim 8
Claim 8 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 1. Claim 8 is similar in scope to claim 1 and is therefore rejected under similar rationale.
Regarding Claim 9
Claim 9 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 2. Claim 9 is similar in scope to claim 2 and is therefore rejected under similar rationale.
Regarding Claim 13
Claim 13 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 6. Claim 13 is similar in scope to claim 6 and is therefore rejected under similar rationale.
Regarding Claim 14
Claim 14 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 7. Claim 14 is similar in scope to claim 7 and is therefore rejected under similar rationale.
Regarding Claim 15
Claim 15 is directed to a system corresponding to the computer-implemented method in claim 1. Claim 15 is similar in scope to claim 1 and is therefore rejected under similar rationale.
Regarding Claim 16
Claim 16 is directed to a system corresponding to the computer-implemented method in claim 2. Claim 16 is similar in scope to claim 2 and is therefore rejected under similar rationale.
Regarding Claim 20
Claim 20 is directed to a system corresponding to the computer-implemented method in claim 6. Claim 20 is similar in scope to claim 6 and is therefore rejected under similar rationale.
Claims 3-5, 10-12 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Camp (US 2024/0061937 A1), in view of Bazalgette (US 2024/0223592 A1) as applied to claim 1 above, and further in view of Ding (US 2021/0064751 A1).
Regarding Claim 3
Camp discloses:
The method of claim 1, wherein predicting the plurality of possible subsequent actions includes determining probabilities of possible subsequent actions based on the behavioral graph and a predictive model (Camp ¶85, 90, 112: Teaches using a predictive model (attack model or NLP model) that operates on a behavioral graph (attack graph) to determine probability scores for various next actions. These probabilities reflect how likely each predicted action is to follow, based on prior behavioral patterns encoded in the graph.),
Camp and Bazalgette teach a system that analyzes real-time executable behavior, constructs behavioral graphs from observed actions, predicts possible malicious subsequent actions, and enforces policies to block those actions based on learned attack patterns. However, they do not disclose the following limitation: “selecting a predetermined number of most probable possible subsequent actions.”
However, in an analogous art, Ding discloses a top-t system/method that includes:
and selecting a predetermined number of most probable possible subsequent actions (Ding ¶86: teaches selecting a predetermined number of most probable possible subsequent actions by using a threshold-based method that evaluates path embedding vectors from a provenance graph and triggers a decision when the top t predicted paths are classified as malicious. This reflects both probability-based ranking and fixed-number selection of the most likely malicious outcomes.).
Given the teachings of Ding, a person having ordinary skill in the art before the effective filing date would have found it obvious to modify the teachings of Camp and Bazalgette by selecting a predetermined number of the most probable subsequent actions for decision-making. Ding teaches using an outlier detection model to evaluate path embedding vectors from a provenance graph and trigger a decision when the top t paths are predicted as malicious, thereby ranking future behavioral sequences and selecting a fixed number of high-probability ones for early classification and mitigation. Since selecting the top-t or top-k results based on model output is a well-known design choice in graph-based predictive analytics and reduces detection overhead while preserving accuracy, it would have been obvious to implement top-N selection on predicted actions to prioritize enforcement or mitigation steps (Ding ¶86).
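The top-t selection attributed to Ding above reduces, in essence, to ranking model-scored candidate actions and keeping a fixed number of the most probable. A minimal illustrative sketch (the scores and action names are hypothetical, not values from Ding):

```python
# Hypothetical probability scores a predictive model might assign to
# candidate subsequent actions.
scores = {
    "encrypt_files": 0.62,
    "lateral_move": 0.21,
    "read_config": 0.10,
    "exit": 0.07,
}

def select_top_t(action_scores, t):
    """Select a predetermined number (t) of the most probable
    possible subsequent actions by descending score."""
    ranked = sorted(action_scores.items(),
                    key=lambda kv: kv[1], reverse=True)
    return [action for action, _ in ranked[:t]]

top_2 = select_top_t(scores, t=2)
# top_2 == ["encrypt_files", "lateral_move"]
```

Fixing t bounds the number of candidates passed downstream, which is the overhead-reduction rationale cited in the obviousness analysis.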
Regarding Claim 4
Camp discloses:
The method of claim 3, wherein the predictive model is a statistical or machine learning model trained to predict a probability distribution of subsequent actions based on a behavioral graph input (Camp ¶86, 90, 104, 112: teaches an ML model (LSTM, multilayer perceptron) that is trained to operate on an attack graph and output probability values (confidence scores) for predicted next actions (attack words). These values represent a probability distribution over possible subsequent actions.).
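The probability-distribution output recited in claim 4 can be illustrated with a minimal sketch. The logit values and action names below are hypothetical; a softmax over fixed numbers stands in for the LSTM/multilayer perceptron output described by Camp:

```python
import math

# Hypothetical raw model scores (logits) for candidate next actions.
logits = {"encrypt_files": 2.0, "lateral_move": 1.0, "exit": 0.0}

def softmax(raw):
    """Normalize raw scores into a probability distribution over
    possible subsequent actions (values sum to 1)."""
    m = max(raw.values())                       # subtract max for numerical stability
    exps = {a: math.exp(v - m) for a, v in raw.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

dist = softmax(logits)
# dist sums to 1.0; "encrypt_files" receives the highest probability.
```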
Regarding Claim 5
Camp discloses:
The method of claim 3, wherein the predictive model is trained on a dataset of behavioral graphs of benign software and/or malware (Camp ¶93-94 and 104: discloses generating attack word graphs from detonating malware in a test environment, as well as using public infosec articles to create labeled attack word sequences. These serve as training datasets for the predictive model, which is thereby trained on known malicious behavior.).
Regarding Claim 10
Claim 10 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 3. Claim 10 is similar in scope to claim 3 and is therefore rejected under similar rationale.
Regarding Claim 11
Claim 11 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 4. Claim 11 is similar in scope to claim 4 and is therefore rejected under similar rationale.
Regarding Claim 12
Claim 12 is directed to a computer-readable storage medium corresponding to the computer-implemented method in claim 5. Claim 12 is similar in scope to claim 5 and is therefore rejected under similar rationale.
Regarding Claim 17
Claim 17 is directed to a system corresponding to the computer-implemented method in claim 3. Claim 17 is similar in scope to claim 3 and is therefore rejected under similar rationale.
Regarding Claim 18
Claim 18 is directed to a system corresponding to the computer-implemented method in claim 4. Claim 18 is similar in scope to claim 4 and is therefore rejected under similar rationale.
Regarding Claim 19
Claim 19 is directed to a system corresponding to the computer-implemented method in claim 5. Claim 19 is similar in scope to claim 5 and is therefore rejected under similar rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD A ABDULLAH whose telephone number is (571) 272-1531. The examiner can normally be reached on Monday - Friday, 8:30am - 5:00pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached on (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAAD AHMAD ABDULLAH/Examiner, Art Unit 2431
/SHIN-HON (ERIC) CHEN/Primary Examiner, Art Unit 2431