DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 21-40 are pending in this application.
Claim 21 is amended as part of the preliminary amendment submitted on 05/23/2024.
Claims 1-20 are canceled as part of the preliminary amendment submission.
Claims 26-40 are newly added as part of the preliminary amendment submission.
No IDS was filed by the Applicant.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than an abstract idea.
Step 2A, Prong One - Abstract Idea:
The claims recite, at a high level, “receiving an indication of a first desired modification to a cybersecurity event detector” (e.g., receiving information regarding a modification to a cybersecurity event detector); “for each system event … determining … indicative of a potential cybersecurity event” (e.g., analyzing system events); “determining a first number of system events of a true positive subset …” (e.g., determining statistical counts of true positives, false positives, and false negatives); and “receiving an indication of a second desired modification to the cybersecurity event detector in the production environment” (e.g., using those statistics to decide whether to modify a detector in a production environment).
These limitations collectively amount to collecting data, analyzing data using mathematical/statistical techniques, and making a decision based on the results, which is a well-recognized abstract idea. See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016).
Step 2A, Prong Two - No Practical Application:
The claims do not integrate the abstract idea into a practical application. Although the claims recite use of: “a sandbox environment”, “a production environment” and “a graphical user interface,” these elements are merely generic computing environments and output mechanisms and do not impose any meaningful limitation on the abstract idea itself.
The claims do not: improve the functioning of a computer, improve network security technology itself, or recite a specific technical mechanism for detecting cybersecurity events. Instead, the claims merely apply the abstract idea using generic computer components as tools, which is insufficient to confer eligibility.
Step 2B - No Inventive Concept:
The claims do not recite additional elements that amount to significantly more than the abstract idea. In particular: “cybersecurity event detector” is claimed purely functionally, without any specific structure or algorithm; “modifying” the detector is result-oriented and unspecified; determining true positives, false positives, and false negatives is a mathematical/statistical evaluation; and displaying statistics via a GUI is merely extra-solution activity. Each of these elements, individually and in combination, represents well-understood, routine, and conventional activity in the field of cybersecurity as of the claimed priority date and does not confer eligibility. Accordingly, claims 21-40 do not recite an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter.
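By way of illustration only (hypothetical sketch, not drawn from Applicant's disclosure), the recited determination of true-positive, false-positive, and false-negative counts is nothing more than a conventional confusion-matrix tally that can be expressed in a few lines of generic code, underscoring its mathematical/statistical character:

```python
# Illustrative only: counting the three recited statistics from paired
# detector verdicts and ground-truth labels (hypothetical data).

def detection_statistics(events):
    """events: iterable of (detected, actual) boolean pairs per system event."""
    tp = sum(1 for detected, actual in events if detected and actual)      # true positives
    fp = sum(1 for detected, actual in events if detected and not actual)  # false positives
    fn = sum(1 for detected, actual in events if not detected and actual)  # false negatives
    return tp, fp, fn

sample = [(True, True), (True, False), (False, True), (False, False)]
print(detection_statistics(sample))  # (1, 1, 1)
```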
Therefore, claims 21-40 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21-40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 21, 28, and 35 recite a “first desired modification” and a “second desired modification”. These terms are subjective and do not provide objective boundaries for claim scope. The claims do not specify the nature of the modification, the scope of the modification, or any criteria by which a modification qualifies as “desired.” As a result, the scope of the claims cannot be reasonably determined.
These claims further recite “cybersecurity event detector.” The term “cybersecurity event detector” is recited purely in functional terms without any corresponding structure, algorithm, or operational definition. It is unclear whether the cybersecurity event detector comprises rules, heuristics, machine learning models, signatures, or other mechanisms. Accordingly, the scope of the claims is indefinite.
These claims further recite the limitation “system event.” The term “system event” is undefined. The claims do not specify whether a system event refers to log entries, network packets, process executions, file system changes, or other system activity. Thus, the metes and bounds of the claims are unclear.
These claims further recite the limitation “actual cybersecurity event.” The phrase “actual cybersecurity event” lacks an objective definition and appears to depend on hindsight or human judgment. The claims fail to specify how an event is determined to be “actual,” or what authority or mechanism makes such a determination. This renders the scope of the claims indefinite.
These claims further recite “the second desired modification being generated based at least in part on one or more cybersecurity event detection statistics.” The phrase “based at least in part” is open-ended and does not provide clarity regarding the degree of reliance on the statistics or what other factors may influence the modification. Such language fails to distinctly claim the invention.
Dependent claims 22-27, 29-34, and 36-40 are included in the statement of rejection but not specifically addressed in the body of the rejection; they inherit the deficiencies of their parent claims and do not resolve those deficiencies. Therefore, these dependent claims are rejected based on the same rationale as applied to their parent claims above.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Vasilenko et al. (US 2017/0147819 A1) (hereinafter, “Vasilenko”) in view of Titonis et al. (US 9,672,355 B2) (Titonis) and further in view of Sekar et al. (US 2020/0059481 A1) (hereinafter, “Sekar”).
As to claim 21, Vasilenko discloses a method, performed by one or more processors, comprising:
receiving an indication of a first desired modification to a cybersecurity event detector in a sandbox environment, the cybersecurity event detector being contemporaneously used for detecting one or more potential cybersecurity events in a production environment (“The shadow sandbox is a virtual machine replica of a computing environment for a protected computing system. The shadow sandbox is maintained through all change events that occur to the computing environment and protected computing system.” -e.g., see, Vasilenko: [0002]; herein, a “virtual machine replica of a computing environment” establishes contemporaneous operation with production; see also: “… detecting a change event on the target computing system, and updating the virtual machine based on the detected change event.” -e.g., see, [0003]; herein, “detecting a change event” and “updating the virtual machine” constitutes receiving an indication of a desired modification; see also: “FIG. 3, described below, is a flowchart for an example method of detecting change and risk events and applying the detected events to a shadow sandbox for use in malware detection.” -e.g., see, [0033]; herein, modification of malware detection behavior is functionally equivalent to modification of a cybersecurity event detector; see also: “… the shadow sandbox is a virtualized computing system maintained in parallel to a protected computing system 120, e.g., using a shadow platform 160.” -e.g., see, [0034]);
modifying, in the sandbox environment, the cybersecurity event detector based on the indication of the first desired modification to the cybersecurity event detector in the sandbox environment (“… detecting a change event on the target computing system, and updating the virtual machine based on the detected change event.” -e.g., see, [0003]; see also: “… a change in the target computing environment and updating, at stage 240, the shadow sandbox based on the detected change. In some implementations, the shadow sandbox is a virtualized computing system maintained in parallel to a protected computing system 120, e.g., using a shadow platform 160. The shadow sandbox can then be modified, examined, and manipulated in manners that may not be possible with the protected computing system 120. The added access can facilitate detection of malware and infectious malicious code.” -e.g., see, [0034]; herein, the malware detection logic must be updated to reflect such changes);
Vasilenko does not explicitly disclose for each system event in a set of system events, determining, in the sandbox environment, whether the respective system event is indicative of a potential cybersecurity event using the modified cybersecurity event detector;
determining a first number of system events of a true positive subset of the set of system events, a second number of system events of a false positive subset of the set of system events, and a third number of system events of a false negative subset of the set of system events; and
receiving an indication of a second desired modification to the cybersecurity event detector in the production environment, the second desired modification being generated based at least in part on one or more cybersecurity event detection statistics, the one or more cybersecurity event detection statistics including the first number of system events, the second number of system events, and the third number of system events.
However, in an analogous art, Titonis discloses for each system event in a set of system events, determining, in the sandbox environment, whether the respective system event is indicative of a potential cybersecurity event using the modified cybersecurity event detector (“… analyzed for anomalous and malicious behavior using data acquired during the execution of the application within a highly instrumented and controlled environment for which the analysis relies on per-execution as well as comparative aggregate data across many such executions …” -e.g. see, abstract; herein, “highly instrumented and controlled environment” corresponds to a sandbox; “data acquired during execution” necessarily includes individual system events; evaluating each execution for anomalous or malicious behavior corresponds to determining whether each system event indicates a cybersecurity event; see also: “Output logs from the behavioral analysis provide an analyst with fine-grained detail of the malware's actions, including but not limited to, a summary of the analysis, results of third-party antivirus scans, full sandbox simulation logs, screen shots, summary and detail of GUI traversal coverage, summary and detail of network activity, summary and detail of network IP reach observed during the sandbox simulation, summary and detailed annotated analysis for high-level logs such as activity manager and event logs, summary and detail of execution traversal of the user interface, summary and detailed annotated analysis for low-level operating system call logs, summary and annotated analysis over an integrated timeline across such logs, summary and detail of file system integrity analysis, summary and detail of identified network transferred file objects including antivirus scan results, summary and detail of browser activity, behavioral chronologies and statistical profiles extracted from operating system calls, application-level library calls as well as file system operations, CPU and/or 
memory profiles, summary and detail of intrusion detection alerts, summary and detail of ad-server imposed network traffic load, and summary and detail of network reach into malicious sites of the application during execution.” -e.g. see, col. 3, lines 51-67 to col. 4, lines 1-7; see also: col. 5, lines 54-60);
determining a first number of system events of a true positive subset of the set of system events, a second number of system events of a false positive subset of the set of system events, and a third number of system events of a false negative subset of the set of system events (“Comparative Confusion Tables (2161) for these documenting true positives, true negatives, false positives, and false negatives in terms of both applications and feature vectors for these (Internal AV Scanner, External AV Scanner, and Machine Learning Clustering Classifier) when each such is compared against the same reference/benchmark oracle” -e.g., see, col. 40, lines 47-53; herein, comparative confusion tables track TP, TN, FP, and FN for AV and ML classifiers versus a reference oracle; see also: “… the analysis relies on per-execution as well as comparative aggregate data across many such executions …” -e.g., see, abstract; herein, comparative aggregate data includes comparative confusion table data of true positives, true negatives, false positives, and false negatives);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vasilenko to incorporate the teaching of Titonis in order to provide the data necessary to calculate key performance metrics used to evaluate and tune a system’s effectiveness.
Vasilenko in view of Titonis does not explicitly disclose receiving an indication of a second desired modification to the cybersecurity event detector in the production environment, the second desired modification being generated based at least in part on one or more cybersecurity event detection statistics, the one or more cybersecurity event detection statistics including the first number of system events, the second number of system events, and the third number of system events.
However, in an analogous art, Sekar discloses receiving an indication of a second desired modification to the cybersecurity event detector in the production environment, the second desired modification being generated based at least in part on one or more cybersecurity event detection statistics, the one or more cybersecurity event detection statistics including the first number of system events, the second number of system events, and the third number of system events (“…identifying trustworthiness values in a portion of data associated with the cyber events. Yet a further disclosed operation includes assigning provenance tags to the portion of the data based on the identified trustworthiness values. ” -e.g., see, [0032]; see also: “… the system during forward analysis will generate a final compact scenario graph representation including nodes most relevant to the detected attack using tag-based root-cause and impact analysis,…” -e.g., see, [0253]; see also: “In certain aspects or embodiments, one approach for reducing the size is to use a distance threshold d.sub.th to exclude nodes that are “too far” from the suspect nodes. Threshold d.sub.th can be interactively tuned by an analyst. The system can use the same cost metric that was used for backward analysis, but modified to consider confidentiality aspects as well.” -e.g., see, [0266]; see also: “…a customizable policy framework 4 for tag initialization and propagation may be implemented. A sensible default policy may be used but such policy can also be overridden to accommodate behaviors specific to a particular OS or application. This feature enables tuning of respective detection and analysis techniques in order to avoid false positives in cases where benign applications exhibit behaviors that resemble attacks. Policies also enable an analyst to test “alternate hypotheses” of attacks, by reclassifying what is considered trustworthy or sensitive and re-running the analysis in an alternate scenario.” -e.g. 
see, [0070]; herein, Sekar teaches using detection statistics and analytics (e.g., trustworthiness, tagging, impact analysis) to modify system behavior. Analyst-driven refinement based on observed results corresponds to receiving an indication of a desired modification. Applying such analytics-driven refinements to a production detector based on sandbox results is a predictable feedback loop in cybersecurity systems).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vasilenko and Titonis to incorporate the teaching of Sekar in order to improve detection accuracy and reduce false positives and false negatives.
As to claims 28 and 35, these claims are rejected using a similar rationale as applied for the rejection of claim 21.
As to claim 22, Vasilenko in view of Titonis and Sekar discloses the method of claim 21, and Sekar further discloses: modifying, in the production environment, the cybersecurity event detector, based on the indication of the second desired modification to the cybersecurity event detector in the production environment (“Threshold d.sub.th can be interactively tuned by an analyst.” -e.g., see, Sekar: [0266]; see also: “The system uses the distance threshold value d.sub.th to exclude any nodes that are too distant from the entry_point_node in step 82. The analyst may start with a small value for the d.sub.th and refine it successively if needed or desired” -e.g., see, Sekar: [0250]; herein, Sekar teaches changing live detection parameters (thresholds, cost metrics) based on analysis outcomes. Applying the refined configuration to the operational (production) detector is the natural and expected implementation of analyst-driven tuning).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vasilenko and Titonis to incorporate the teaching of Sekar in order to improve detection accuracy and reduce false positives and false negatives.
As to claims 29 and 36, these claims are rejected using a similar rationale as applied for the rejection of claim 22.
As to claim 23, Vasilenko in view of Titonis and Sekar discloses the method of claim 21, and Titonis further discloses wherein the true positive subset of the set of system events includes a system event which was determined, using the cybersecurity event detector prior to modification in the sandbox environment, to be indicative of a potential cybersecurity event and was determined, using the modified cybersecurity event detector, to be indicative of an actual cybersecurity event (“… the analysis relies on per-execution as well as comparative aggregate data across many such executions …” -e.g., see, Titonis: abstract; see also: “Comparative Confusion Tables (2161) for these documenting true positives, true negatives, false positives, and false negatives in terms of both applications and feature vectors for these (Internal AV Scanner, External AV Scanner, and Machine Learning Clustering Classifier) when each such is compared against the same reference/benchmark oracle” -e.g., see, col. 40, lines 47-53; herein, comparative aggregate analysis necessarily compares pre-change vs. post-change classification outcomes. Events confirmed as malicious after refinement correspond to true positives.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vasilenko to incorporate the teaching of Titonis in order to provide the data necessary to calculate key performance metrics used to evaluate and tune a system’s effectiveness.
As to claims 30 and 37, these claims are rejected using a similar rationale as applied for the rejection of claim 23.
As to claim 24, Vasilenko in view of Titonis and Sekar discloses the method of claim 21, and Titonis further discloses wherein the false positive subset of the set of system events includes a system event which was determined, using the cybersecurity event detector prior to modification in the sandbox environment, to be indicative of a potential cybersecurity event and was determined, using the modified cybersecurity event detector, to not be indicative of an actual cybersecurity event (“Anomalous applications are now identified early … as opposed to waiting for users to complain after wide distribution” -e.g., see, Titonis: col. 3, lines 30-37; herein, early sandbox refinement explicitly aims to reduce incorrect detections. Events initially flagged but later dismissed by improved analysis are false positives, evaluated through comparative execution analysis).
As to claims 31 and 38, these claims are rejected using a similar rationale as applied for the rejection of claim 24.
As to claim 25, Vasilenko in view of Titonis and Sekar discloses the method of claim 21, and Titonis further discloses wherein the false negative subset of the set of system events includes a system event which was determined, using the cybersecurity event detector prior to modification in the sandbox environment, not to be indicative of a potential cybersecurity event and was determined, using the modified cybersecurity event detector, to be indicative of an actual cybersecurity event (“A complete data flow graph can determine if risky behaviors, such a sensitive data exfiltration, actually occur with static analysis alone. A complete data flow graph can determine if sensitive data is actually exfiltrated from the device. Rudimentary static analysis without complete data flow may be able to determine that personal information is accessed and that the application transfers data off the device over a network but it cannot determine that the personal information is the data that is transferred off the device. Static analysis with complete data flow can determine if sensitive data is being transmitted off the device using insecure communication techniques.” -e.g., see, Titonis: col. 4, lines 8-39; herein, improved static/behavioral analysis and tag-based prioritization uncover previously missed attacks, which correspond to false negatives under standard detection metrics).
As to claims 32 and 39, these claims are rejected using a similar rationale as applied for the rejection of claim 25.
As to claim 26, Vasilenko in view of Titonis and Sekar discloses the method of claim 21, and Titonis further discloses: generating a graphical user interface; and causing the one or more cybersecurity event detection statistics to be displayed via the graphical user interface (“The Cloud Service includes a Web Server, Controller, Dispatcher, Database, Dashboard, Clustering and Visualization components.” -e.g., see, Titonis: col. 3, lines 6-18; herein, a “Dashboard” and “Visualization components” provide a GUI displaying detection statistics, including analysis outcomes).
As to claim 33, it is rejected using a similar rationale as applied for the rejection of claim 26.
As to claim 27, Vasilenko in view of Titonis and Sekar discloses the method of claim 26, Sekar further discloses wherein the indication of the second desired modification to the cybersecurity event detector is received via the graphical user interface (“Threshold d.sub.th can be interactively tuned by an analyst.” -e.g., see, Sekar: [0266]; herein, interactive tuning occurs through a user interface, which receives modification inputs).
As to claim 34, it is rejected using a similar rationale as applied for the rejection of claim 27.
As to claim 40, it is rejected using a similar rationale as applied for the rejections of claims 26 and 27.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Avasarala et al. (US 9,665,713 B2) teaches a system for detecting zero-day malware using machine learning classifiers trained on partitioned file categories and features like n-grams and system calls observed in a sandbox environment, where the detector (classifier) is modified through feature selection and retraining to improve accuracy. Sandbox analysis executes files to capture behavioral traces, determining TP/FP rates via anomaly scoring and qualified meta-features that reduce false alarms. These statistics (e.g., improved TP from 80% to 90%, FP reduced from 18% to 7%) guide threshold adjustments and model updates for production deployment. -e.g., see, abstract, Fig. 2, col. 9, lines 120 of Avasarala.
Shukla (US 2008/0016339 A1) teaches an application sandbox that isolates programs in a controlled environment to monitor and detect malware through API hooking, behavioral patterns, and signature analysis, allowing modifications to detection rules based on observed actions. It analyzes system events like file/registry modifications and network access in the sandbox to classify true positives (malicious behaviors) and false positives/negatives via integrity checks and statistical profiling. These TP/FP/FN statistics refine sandbox boundaries and rules, informing production environment modifications for enhanced threat containment and reduced errors. -e.g., see, Abstract, [0037], [0039], [0110], [0130] of Shukla.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUMAN DEBNATH whose telephone number is (571)270-1256. The examiner can normally be reached Mon-Fri; 9:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Farid Homayounmehr can be reached at 571-272-3739. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SUMAN DEBNATH
Patent Examiner
Art Unit 2495
/S.D/Examiner, Art Unit 2495
/FARID HOMAYOUNMEHR/Supervisory Patent Examiner, Art Unit 2495