Detailed Action
This Office action is in response to the communications dated 08/22/2025 and 03/17/2025.
In the communication dated 08/22/2025, claims 1 and 4-7 are amended, claims 11-15 are newly added, and all other claims are previously presented.
Claims 1-15 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
The instant application, filed 02/02/2023, claims priority benefit from provisional application number 63/306,889. The prior-filed application provides adequate support in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Therefore, the effective filing date for the pending claims is 02/04/2022.
Information Disclosure Statements
The information disclosure statement (IDS) submitted on 08/27/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s argument, filed on 03/17/2025, with respect to the rejection of claims 1-10 under 35 USC 112(b) has been fully considered. The argument is found persuasive in view of the amendments to the pending claims 1, 4 and 6-7. Therefore, the previously issued rejection of the pending claims 1-10 under 35 USC 112(b) is withdrawn.
Applicant’s argument, filed on 03/17/2025, with respect to the Double Patenting rejection for the pending claims 1-10 has been fully considered. The previously issued Double Patenting rejection is now withdrawn in view of the amendment.
Applicant’s arguments, filed on 03/17/2025 and 08/22/2025, with respect to the rejection of the pending claims 1-10 under 35 USC 103 have been fully considered. Accordingly, the previously issued rejection of the pending claims 1-10 under 35 USC 103 is withdrawn. However, upon further consideration, new grounds of rejection are made, at least, in view of the previously applied references by Bakthavatchalam and Wright in combination with a newly applied reference by Shou (US Patent 8,549,643 B1), hereinafter Shou. Specifically, Shou cures the deficiency of the combination of the previously applied references by Bakthavatchalam and Wright by teaching the newly amended claim features, such as the security system using a data loss prevention (DLP) agent/system that is configured to monitor data traffic for confidential information crossing a trust boundary and notify a system administrator, as required by the amended claim 1. Please refer to the details of the prior-art rejection of the newly amended claims below.
In regard to Applicant’s argument for claim 2, based on Applicant’s remarks filed on 03/17/2025, that Bakthavatchalam’s rule-based comparisons differ from the present claimed invention because the claimed pattern or anomaly comparison includes models with unsupervised learning, the argument is not found persuasive. Specifically, the Office points out that the language of claim 2 never explicitly recites that the pattern or anomaly comparison includes models using unsupervised learning. Therefore, the rejection of claim 2 is maintained, at least, based on the teachings of Bakthavatchalam and Wright. This is particularly true as parag. [0025] of Bakthavatchalam describes the one or more characteristics including a pattern or an anomaly, while parag. [0057] of Wright further describes the pattern or anomaly in comparison to authentic behavior.
In regard to Applicant’s argument for claims 3-10, it is found persuasive that the previously applied references by Bakthavatchalam, Wright and Parker would not be sufficient to render these claims obvious due to the amendment to the pending claim 1. However, as stated above, new grounds of rejection based on the previously applied references by Bakthavatchalam, Wright (and Parker) in addition to the newly applied reference by Shou would be sufficient to render each of claims 3-10 obvious.
Claim Objections
Claims 1 and 14 are objected to because of the following informalities:
Claim 1 is objected to for failing to explicitly include any specific component(s) of the claimed security system. A proper device/system claim should, at least, recite an element that further describes the structure of the recited device/system.
Claim 14 is objected to for the recitation “… agent is configured for an administrator may make real-time threshold adjustments”, as the phrasing of the recitation is unclear. The recitation may be amended to read “… agent is configured for an administrator to make real-time threshold adjustments”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 8-11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Bakthavatchalam et al. (US 20180114023 A1), hereinafter Bakthavatchalam, in view of Wright (US 20210021592 A1), hereinafter Wright, and further in view of Shou (US Patent 8549643 B1), hereinafter Shou.
Regarding claim 1:
Bakthavatchalam teaches a security system for implementing a threat characteristic recognition process in a computing environment (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”)), the security system configured to:
monitor data traffic [at one or more access points of the computing environment] (Bakthavatchalam [0021] In various embodiments disclosed herein, network traffic is compressed and then malware-searched within a hardware-accelerated rule search engine [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”) [0025] … malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic);
provide the data traffic to the security system as an input for analysis (Bakthavatchalam [0026] FIG. 2 illustrates an embodiment of a malware detection module 150 (e.g., that may be deployed within the ingress security engine 103 of FIG. 1) having a rule buffer 151 and a hardware-accelerated rule search engine 155. As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155. Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic);
identify one or more characteristics of the data traffic (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131);
compare the one or more characteristics of the data traffic to characteristics stored on one or more databases corresponding to suspicious or malicious behavior (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events);
prevent access to the computing environment or transmission of the data traffic if the one or more characteristics match with the characteristics stored on the one or more databases (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events); and
[the security system uses a data loss prevention (DLP) agent configured to monitor data traffic for confidential information crossing a trust boundary, and block transmission of the data traffic and/or notify a system administrator].
Bakthavatchalam does not expressly teach:
the security system configured to: monitor data traffic at one or more access points of the computing environment; and
determine if the one or more features are unauthorized actions or from an unauthorized actor based on the characteristics.
However, Wright teaches a security system for implementing a threat characteristic recognition process in a computing environment (Wright [0003] Disclosed examples relates to a system for securing devices and data in a computing environment [0004] In some examples, the disclosed security system is configured to provide protection for computing and networked devices from threats, as well as to protect data [0038] As the transmission characteristics evolve (e.g., from one generation of cellular transmission to the next), the range of frequencies and/or potential threats associated with those characteristics will be updated and provided to a user and/or administrator),
the security system configured to: monitor data traffic at one or more access points of the computing environment (Wright [0026] In some examples, security systems and methods are employed to identify threats and/or act to mitigate threats on one or more IoT connected devices. In an example, an agent (e.g., software and/or hardware driven, such as an Ethical Agent powered by AI) can be employed into an IoT environment to scan devices and/or data traffic. The agents can scan for threats, such as connection or attempted connection to the network and/or devices from an unauthorized source [0110] In some examples, the algorithm scanning engine 122 (e.g., software and/or hardware, such as a secure FPGA configured to implement the algorithm scan) can be integrated into a system that collects, transmits, stores, and/or otherwise processes the inputs for the algorithm. This may include a server, a processor, a transmission component (e.g., a router, a cellular tower, a satellite, etc.), such that the algorithm scanning engine 122 may identify implementation of such an algorithm and provide the information to an administrator, the authorities, and/or automatically modify the algorithm's behavior);
determine if the one or more features are unauthorized actions or from an unauthorized actor based on the characteristics (Wright [0011] … The security systems and methods actively look for signatures of such threats. [0054] In some examples, malware or other malicious content may exist on the client device and attempt to exploit data and/or functionality of the client device. In examples, the malicious payload(s) are prevented from being downloaded, either by having been identified as malware in advance (e.g., known malware, malware as identified by an agent, etc.), and/or by recognizing unusual behavior from the sender (as disclosed herein), such that the download is blocked and/or routed to a diversion environment for additional processing [0055] In the event that malicious payloads are downloaded and executed on the client device, the security system functions to detect the malicious data post exploitation. This can be due to unusual activity (e.g., transmitting data in the absence of a request and/or authorization from the user), and/or identification of the result of the malware as being on a list of malicious data (e.g., identified by an agent and communicated to the client device and/or user). Once identified, the security system is designed to block further execution of the malware (e.g., end processing of the affected component, end transmission of data, disconnect from the network, etc.), and/or route the malware and/or traffic to a diversion environment for additional processing).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam and Wright before them, to modify the security system of Bakthavatchalam to monitor data traffic at one or more access points of the computing environment and to determine if the one or more features are unauthorized actions or from an unauthorized actor based on the characteristics. One would have been motivated to have a secure system in which the system continuously monitors for known threats, as well as proactively pursues information on emerging or unknown threats, as taught by Wright (see Wright [0011]).
Bakthavatchalam and Wright do not expressly teach:
the security system uses a data loss prevention (DLP) agent configured to monitor data traffic for confidential information crossing a trust boundary, and block transmission of the data traffic and/or notify a system administrator.
However, Shou teaches the security system uses a data loss prevention (DLP) agent configured to monitor data traffic for confidential information crossing a trust boundary (Shou, Col. 16, lines 53-62: “The DLP system 500 may be a host based DLP system (e.g., host based DLP system 115 of FIG. 1) or a network based DLP system (e.g., network based DLP system 132 of FIG. 1). The DLP system 500 may monitor different data loss vectors, applications, data, etc. to detect attempts to move sensitive data and bait data off of an endpoint device and/or off of an enterprise's network. Additionally, the DLP system 500 may monitor traffic to identify deviations in decoy traffic. A network based DLP system may monitor network traffic as it passes through, for example, a firewall”), and block transmission of the data traffic and/or notify a system administrator (Shou, Col. 18, lines 52-67: “Policy violation responder 525 applies one or more DLP response rules 580 when a DLP policy violation is detected. Each DLP response rule 580 may be associated with one or more DLP policies 570. Each DLP response rule 580 includes one or more actions for policy violation responder 525 to take in response to violation of an associated DLP policy 570. Once a violation of a DLP policy 570 is discovered, policy violation responder 525 may determine which DLP response rules are associated with the violated DLP policy 570. One or more actions included in the response rule 580 can then be performed. Examples of performed actions include sending a notification to an administrator, preventing the data from exiting an endpoint device through a data loss vector, locking down the computer so that no data can be moved off of the endpoint device through any data loss vector, encrypting data as it is moved off the endpoint device, and so on”).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined teaching of Bakthavatchalam and Wright with Shou’s teaching of a data loss prevention agent and technique. One would have been motivated to have a computing device that identifies the potential security threat by the DLP system in response to determining that at least one of the network traffic or the bait data deviates from expected values (see Shou, Col. 1, lines 55-59).
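For illustration only, and not as part of the record or the teaching of any cited reference, the Examiner notes that the kind of DLP behavior recited in the amended claim 1 (monitoring traffic for confidential information crossing a trust boundary, then blocking transmission and/or notifying an administrator) could be sketched as follows; all pattern definitions, names, and values below are assumed:
```python
import re

# Hypothetical confidential-information patterns (assumed for illustration;
# a real DLP policy set would be far richer than these two regexes).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like token
    re.compile(r"\bCONFIDENTIAL\b"),        # marked document text
]

def notify_administrator(payload: str) -> None:
    # Placeholder notification channel; an actual system might raise an
    # incident in a security console instead of printing.
    print("DLP alert: confidential data detected in outbound traffic")

def dlp_inspect(payload: str, crosses_trust_boundary: bool) -> bool:
    """Return True (block transmission) if confidential data would cross
    the trust boundary; otherwise allow the traffic through."""
    if not crosses_trust_boundary:
        return False
    if any(p.search(payload) for p in CONFIDENTIAL_PATTERNS):
        notify_administrator(payload)
        return True
    return False

# Example: an outbound message carrying an SSN-like token is blocked.
assert dlp_inspect("employee SSN 123-45-6789", crosses_trust_boundary=True)
```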
Regarding claim 2:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the one or more characteristics include a pattern or an anomaly [in comparison to authentic behavior] (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events).
Bakthavatchalam does not expressly teach wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior.
However, Wright further teaches wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior (Wright [0057] In some examples, the security system recognizes trends in user behavior, such that anomalous actions and/or traffic can be identified and investigated (e.g., by routing to a diversion environment). This can be implemented by historical tracking and/or application of AI tools to make connections (e.g., between trusted devices), recognize patterns (e.g., in user behavior), identify associated individuals and locations (e.g., within an organization, family, etc.). Thus, when an anomalous event occurs, the security system may evaluate the risk and determine suitable actions suitable to mitigate the risk).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to have the system feature wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior. One would have been motivated to do so using the same reasoning as applied in claim 1.
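For illustration only (baseline values assumed, not drawn from the cited references), a pattern-or-anomaly comparison against a history of authentic behavior, of the kind described in Wright [0057], might be sketched as:
```python
from statistics import mean, stdev

# Hypothetical baseline of authentic behavior (e.g., bytes transferred per
# session for a given user); toy values for illustration only.
baseline = [1200, 1350, 1100, 1280, 1220, 1310]

def is_anomalous(observation: float, history: list, k: float = 3.0) -> bool:
    """Flag observations more than k standard deviations from the baseline mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > k * sigma

print(is_anomalous(1250, baseline))   # False: consistent with authentic behavior
print(is_anomalous(50000, baseline))  # True: deviates from the learned pattern
```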
Regarding claim 3:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the one or more characteristics [include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount] (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events).
Bakthavatchalam does not expressly teach wherein the one or more characteristics include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount
However, Wright further teaches wherein the one or more characteristics include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount (Wright [Claim 1] … authenticate, using the biometric entry, the user of the client device; and permit the user access to data on the client device responsive to authentication of the user via the biometric entry [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats. There are particular advantages for devices that are configured for extended periods of sleep and/or passive and/or on-demand operation, such as smart speakers, device connected to the Internet of things (IoT), logistical waypoints, communications equipment, as a non-limiting list of examples [0027] In examples, the IoT connected devices are authorized to capture a particular type of information (e.g., a near field communication (NFC) enabled smart device to access a building, transfer information, payment, etc.; a biometric scanner; electric car charging station sensors; ultrasound sensors; etc.). The disclosed security systems and methods can scan associated sensors and identify whether the IoT connected device is employing expected (e.g., limited, authorized, etc.) techniques and connections to access data. If such a device attempts to expand data access beyond an authorized and/or recognized use, the security system will prevent such attempts, and/or route the commands and/or associated data to a diversion environment for additional processing).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to modify the system to include a security feature wherein the one or more characteristics include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount. One would have been motivated to do so using the same reasoning as applied in claim 1.
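For illustration only (threshold values assumed, not taken from the claims or references), threshold-based characteristics of this kind might be sketched as:
```python
# Assumed threshold values for illustration; a deployed system would make
# these policy-configurable rather than hard-coded.
MAX_LOGIN_ATTEMPTS = 5
MAX_FAILED_LOGINS = 3
MAX_DATA_REQUEST_BYTES = 10_000_000

def suspicious(attempts: int, failures: int, requested_bytes: int) -> bool:
    """Return True when any monitored characteristic exceeds its threshold."""
    return (attempts > MAX_LOGIN_ATTEMPTS
            or failures > MAX_FAILED_LOGINS
            or requested_bytes > MAX_DATA_REQUEST_BYTES)

print(suspicious(attempts=2, failures=1, requested_bytes=4096))  # False
print(suspicious(attempts=9, failures=0, requested_bytes=4096))  # True
```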
Regarding claim 4:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein one or more results of comparing the one or more databases are cross-referenced to determine if the one or more characteristics is a match with any of the one or more databases (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0026] As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155. Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events).
Regarding claim 6:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam does not expressly teach wherein the threat characteristic recognition process is configured to run on a client device or via one or more networked computing assets.
However, Wright further teaches wherein the threat characteristic recognition process is configured to run on a client device or via one or more networked computing assets (Wright [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats. [0052] malware or other malicious content may exist on the client device and attempt to exploit data and/or functionality of the client device. In examples, the malicious payload(s) are prevented from being downloaded, either by having been identified as malware in advance (e.g., known malware, malware as identified by an agent, etc.), and/or by recognizing unusual behavior from the sender (as disclosed herein)).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to modify the system to include a feature for the threat characteristic recognition process to run on a client device or via one or more networked computing assets. One would have been motivated to do so using the same reasoning as applied in claim 1.
Regarding claim 8:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the security system is connected to one or more [internet of things (IoT) enabled] devices [including a camera or a client device] (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”— e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”). While appliance 100 (which may constitute or be part of a firewall and/or carry out various other network functions such as traffic switching/routing, access control, deduplication, accounting, etc.) is depicted as having an Ethernet-based exterior interface (implementing at least physical (PHY) and media-access control (MAC) layers of the Ethernet stack as shown at 101) and a more generalized interior interface, various alternative or more specific network interfaces may be used on either or both sides of the appliance, including proprietary interfaces where necessary. Also, while separate (split) inbound and outbound traffic paths are shown, a single bidirectional path may be implemented with respect to either or both of the exterior and interior interfaces).
Bakthavatchalam does not expressly teach wherein the security system is connected to one or more internet of things (IoT) enabled devices including a camera or a client device.
However, Wright further teaches wherein the security system is connected to one or more internet of things (IoT) enabled devices including a camera or a client device (Wright [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats. There are particular advantages for devices that are configured for extended periods of sleep and/or passive and/or on-demand operation, such as smart speakers, device connected to the Internet of things (IoT), logistical waypoints, communications equipment, as a non-limiting list of examples).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to modify the system to include a feature for the security system to be connected to one or more internet of things (IoT) enabled devices including a camera or a client device. One would have been motivated to do so using the same reasoning as applied in claim 1.
Regarding claim 9:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the security system [is operating on a quantum-enabled device or system] (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”)).
Bakthavatchalam does not expressly teach wherein the security system is operating on a quantum-enabled device or system.
However, Wright further teaches wherein the security system is operating on a quantum-enabled device or system (Wright [0113] FIGS. 3A and 3B provide a flowchart representative of example machine-readable instructions 300, which may be executed by the example security system 102 of FIG. 1, to implement data protection and authentication. The example instructions 300 may be stored in the memory 112 and/or one or more of the data sources 106, and executed by the processor(s) 110 of the security system 102. The example instructions 300 are described below with reference to the systems of FIG. 1. In some examples, the instructions 300 are executed in a quantum computing environment, and/or are configured to provide protection from threats generated from, associated with, transmitted by, and/or stored on a quantum computing platform).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to modify the system to include a feature for the security system to operate on a quantum-enabled device or system. One would have been motivated to do so using the same reasoning as applied in claim 1.
Regarding claim 10:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the security system [builds a machine learning algorithm] to identify the one or more characteristics (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”) [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131).
Bakthavatchalam does not expressly teach wherein the security system builds a machine learning algorithm to identify the one or more characteristics.
However, Wright further teaches wherein the security system builds a machine learning algorithm to identify the one or more characteristics (Wright [0035] … identifying abnormal activities on the client device in comparison to a baseline data (such as via AI monitoring) [0057] In some examples, the security system recognizes trends in user behavior, such that anomalous actions and/or traffic can be identified and investigated (e.g., by routing to a diversion environment). This can be implemented by historical tracking and/or application of AI tools to make connections (e.g., between trusted devices), recognize patterns (e.g., in user behavior), identify associated individuals and locations (e.g., within an organization, family, etc.). Thus, when an anomalous event occurs, the security system may evaluate the risk and determine suitable actions suitable to mitigate the risk [0066] In some example, an AI module will be programmed to identify and enforce any regulations, laws, compliances for the relevant industry [0068] … applying an AI module to identify patterns or keywords [0070] Advantageously, the disclosed systems and methods enable the end user to operate the device and/or access their data without impact. In other words, by use of an diversion environment, as well as continuous detection and update efforts of the AI Agents, the systems and methods protect both devices and data from potential threats, be they known, unknown, or emerging).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, and Shou before them, to modify the system such that the security system builds a machine learning algorithm to identify the one or more characteristics. One would have been motivated to do so using the same reasoning as applied in claim 1.
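For illustration only, and not as a characterization of any cited reference, one way such characteristics could be learned is with an off-the-shelf unsupervised anomaly detector (scikit-learn and all data below are assumed for this sketch):
```python
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors for traffic sessions: [packets/sec, bytes/sec].
# Toy data assumed for illustration only.
normal_traffic = [[10, 1200], [12, 1350], [9, 1100], [11, 1280], [10, 1220]]

# Fit an isolation forest on observed (nominally benign) traffic features.
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalous characteristics.
print(model.predict([[11, 1250]]))    # likely [ 1]: matches learned behavior
print(model.predict([[500, 90000]]))  # likely [-1]: flagged as anomalous
```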
Regarding claim 11:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Shou further teaches wherein the security system is configured to detect adversarial data and/or mutated inputs, adversarial reprogramming, and/or data poisoning attacks (Shou, Col. 3, lines 60-64: “Embodiments of the present invention provide a distributed trap-based defense for detecting malicious threats (e.g., intruders, malware, etc.) attempting to propagate quietly throughout any network, including closed enterprise and governmental networks”. Col. 4, lines 17-20: “Therefore, embodiments of the present invention enable the detection of sophisticated adversaries who have infiltrated a computer system or network, independent of the technical means used to achieve such infiltration.”).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined teaching of Bakthavatchalam and Wright with Shou’s teaching of detecting adversarial data. One would have been motivated to have a secure system with the computing device that identifies the potential security threat by the DLP system in response to determining that at least one of the network traffic or the bait data deviates from expected values (see Shou, Col. 1, lines 55-59).
Regarding claim 13:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Shou further teaches wherein the data loss prevention agent is configured to operate independently of other algorithms and/or system protections (Shou, Col. 15, lines 66-67: “The host based DLP system 452 provides a secure and independent monitoring environment”. Fig. 5 and Col. 16, lines 28-30 and 51-56: “the host based DLP system 452 sends notifications to a network based DLP system when attempts to access sensitive data are detected. … FIG. 5 is a block diagram of a data loss prevention system 500, in accordance with one embodiment of the present invention. The DLP system 500 may be a host based DLP system (e.g., host based DLP system 115 of FIG. 1) or a network based DLP system (e.g., network based DLP system 132 of FIG. 1)”).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the combined teaching of Bakthavatchalam and Wright to be further modified by Shou’s teaching of the DLP operating in an independent environment and fashion. One would have been motivated to have a secure system with the computing device that identifies the potential security threat by the independent DLP system in response to determining that at least one of the network traffic or the bait data deviates from expected values (see Shou, Col. 1, lines 55-59).
Regarding claim 14:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Shou further teaches wherein the data loss prevention agent is configured for threshold adjustment (Shou, Col. 18, lines 14-52: “Accordingly, DLP policies can be configured to detect any type of operation on bait data, such as attempts to intercept, modify, move, exfiltrate, etc. the bait data. This can enable the DLP system 500 to detect a threat much earlier than in a conventional DLP system that operates only on real user generated data. Sophisticated intruders may not actually attempt to exfiltrate sensitive data until after they have been monitoring a system for months. Embodiments of the present invention would detect such careful intruders. … In one embodiment, in which the DLP system 500 … The DLP policy 570 may indicate a threshold amount of acceptable deviation from the deception script. Script deviation monitor 585 determines whether the deviations are sufficient to violate the DLP policy 570 (e.g., whether they exceed the deviation threshold) … Network traffic tracker 505 determines whether the deviations are sufficient to violate the DLP policy 570”. Col. 21, lines 49-54: “At block 840, processing logic determines that a DLP policy has been violated, and performs one or more actions in accordance with a DLP response rule. Processing logic may generate an incident report, flag the endpoint device as being compromised, enable additional (e.g., more stringent) DLP policies, notify an administrator”; Examiner submits that more stringent DLP policies may be configured/adjusted for the DLP system/agent with notification to an administrator), and
Wright additionally teaches [wherein the data loss prevention] agent is configured for an administrator may make real-time threshold adjustments (Wright, [0013]-[0014]: “The disclosed systems and methods empower people and businesses to proactively secure their data and devices in real-time. … The disclosed systems and methods secure data with tools that are built to protect devices and data in real-time”. [0041]: “As used herein, “agents” may be any one or more of an AI agent, such as an AI agent defiant, and/or an AI agent detective, as a list of non-limiting examples. For instance, an AI Agent is powered by artificial intelligence to access and investigate any number of data environments”. [0064]: “… Such monitoring would happen in real-time, such that the user and/or an administrator could view activity as it occurs”. Fig. 4C and [0121]: “As shown in block 400, an auditor/user/administrator enters into a resolution dashboard, where they are able to have full view of all the dashboard utilities”).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined teaching of Bakthavatchalam, Wright, and Shou such that Shou’s data loss prevention agent is configured for an administrator to make real-time threshold adjustments, as suggested by Wright. One would have been motivated to do so based on the same reasoning as applied in claim 1.
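For illustration only (a hypothetical interface, not Shou’s or Wright’s implementation), an agent exposing a real-time, administrator-adjustable threshold might be sketched as:
```python
import threading

class DLPAgent:
    """Sketch of a DLP agent whose deviation threshold an administrator can
    adjust at runtime; names and values are assumed for illustration."""
    def __init__(self, threshold: float):
        self._threshold = threshold
        self._lock = threading.Lock()  # guard concurrent admin adjustments

    def set_threshold(self, value: float) -> None:
        with self._lock:
            self._threshold = value    # takes effect on the next inspection

    def violates_policy(self, deviation: float) -> bool:
        with self._lock:
            return deviation > self._threshold

agent = DLPAgent(threshold=0.25)
print(agent.violates_policy(0.30))  # True under the initial threshold
agent.set_threshold(0.50)           # administrator relaxes the policy live
print(agent.violates_policy(0.30))  # False after the real-time adjustment
```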
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Bakthavatchalam et al. (US 20180114023 A1), hereinafter Bakthavatchalam, in view of Wright (US 20210021592 A1), hereinafter Wright, and Shou (US Patent 8549643 B1), hereinafter Shou, and further in view of Parker (US 20130312092 A1), hereinafter Parker.
Regarding claim 5:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam, Wright, and Shou do not expressly teach wherein a match generates a positive identification report that includes details from each of the one or more databases that contributed to positive identification.
However, Parker teaches wherein a match generates a positive identification report that includes details from each of the one or more databases that contributed to positive identification (Parker [0074] The intelligence engine 130 may be optionally adapted to provide an alert when a positive correlation between the EQD derived from that attack data and known adversaries. This alert is preferably triggered when the correlation achieves a predetermined probability threshold (e.g., the intelligence engine 130 calculates that there is a 95% probability that a known adversary is responsible for the attack data generated by the particular sensor node 150). The intelligence engine 130 can use any data correlation techniques known in the art for comparing the EQD to the AAD in the database 110 and determining a match probability. This alert can be either automatically sent to the owner of the sensor node 150 that provided the attack data that yielded the positive correlation, or it can be sent to an analyst who can then alert the owner of the sensor node 150 [0075] If a match is determined, based on a predetermined probability threshold, the intelligence engine preferably updates the profile (AAD) of the known adversary in the database 110 with the EQD derived from the attack data provided by the sensor node 150. If the intelligence engine 110 does not find a match based on the AAD in the database 110, then the intelligence engine 130 preferably established a new profile for an unknown adversary in the database 110 using the EQD from the unknown adversary).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, Shou, and Parker before them, to modify the security system of Bakthavatchalam, Wright, and Shou to include a feature wherein a match generates a positive identification report that includes details from each of the one or more databases that contributed to the positive identification. One would have been motivated to quickly identify attacks against information technology assets as taught by Parker (see Parker [0028]).
Regarding claim 7:
Bakthavatchalam, Wright, and Shou teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein [the threat characteristic recognition process further comprises updating a database of] the one or more databases when a comparison of the data traffic results in a match (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0026] As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155. Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events).
The combination of Bakthavatchalam, Wright and Shou does not expressly teach wherein the threat characteristic recognition process further comprises updating a database of the one or more databases when a comparison of the data traffic results in a match.
However, Parker teaches wherein the threat characteristic recognition process further comprises updating a database of the one or more databases when a comparison of the data traffic results in a match (Parker [0034] The database 110 preferably includes profiles of known adversaries and/or profiles of previously analyzed cyber-attacks. The profiles preferably contain previously observed and/or theoretical quantitative data relating to adversary characteristics and behavior, as well as previously observed and/or theoretical quantitative data regarding the types of cyber attack mechanisms used. [0074] The intelligence engine 130 may be optionally adapted to provide an alert when a positive correlation between the EQD derived from that attack data and known adversaries. This alert is preferably triggered when the correlation achieves a predetermined probability threshold (e.g., the intelligence engine 130 calculates that there is a 95% probability that a known adversary is responsible for the attack data generated by the particular sensor node 150). The intelligence engine 130 can use any data correlation techniques known in the art for comparing the EQD to the AAD in the database 110 and determining a match probability. This alert can be either automatically sent to the owner of the sensor node 150 that provided the attack data that yielded the positive correlation, or it can be sent to an analyst who can then alert the owner of the sensor node 150 [0075] If a match is determined, based on a predetermined probability threshold, the intelligence engine preferably updates the profile (AAD) of the known adversary in the database 110 with the EQD derived from the attack data provided by the sensor node 150. If the intelligence engine 110 does not find a match based on the AAD in the database 110, then the intelligence engine 130 preferably established a new profile for an unknown adversary in the database 110 using the EQD from the unknown adversary).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright, Shou, and Parker before them, to modify the combined system such that the threat characteristic recognition process further comprises updating a database of the one or more databases when a comparison of the data traffic results in a match. One would have been motivated to quickly identify attacks against information technology assets as taught by Parker (see Parker [0028]).