Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1-20 are pending.
Priority
This application is a national stage entry under 35 U.S.C. 371 of PCT/US2023/025721, filed 06/20/2023, which claims the benefit of provisional application 63/353,769, filed 06/20/2022. Therefore, the effective filing date of this application is 06/20/2022.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: Figure 2 reference number 220. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The abstract of the disclosure is objected to because the abstract of the disclosure does not commence on a separate sheet in accordance with 37 CFR 1.52(b)(4) and 1.72(b). A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/19/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Double Patenting
No double patenting rejection is warranted at the time of this Office action.
Claim Objections
Claim 10 is objected to because of the following informalities: this claim recites the acronym netJSON. For the purpose of examination, the examiner is interpreting this limitation as net JavaScript Object Notation (netJSON). Appropriate correction is required.
Claims 17-20 recite “The system”. However, independent claim 16 recites “A computer-based system”. The examiner suggests amending claims 17-20 to recite “The computer-based system”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 5-7 and 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 5 and 6 recite the limitation "the communication links". There is insufficient antecedent basis for this limitation in the claims. Claim 3 only recites a … communication link. For the purpose of examination, the examiner is interpreting this limitation as a singular communication link: “… the communication link”. Appropriate correction is required.
Claim 7 depends on claim 6. Therefore, claim 7 also inherits the rejection.
Claims 13-15 recite the limitation "the AI-based sequential decision-making optimization". There is insufficient antecedent basis for this limitation in the claims. For the purpose of examination, the examiner is interpreting this limitation as “… the sequential decision-making optimization”. Appropriate correction is required.
Claim 14 recites the limitation "the skill level". There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the examiner is interpreting this limitation as “… a skill level”. Appropriate correction is required.
Claim 16 recites the limitation "the cybersecurity threat information". There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the examiner is interpreting this limitation as “… the cyberattack information”. Appropriate correction is required.
Claims 17-20 depend on claim 16. Therefore, they also inherit the rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites a method for identifying and analyzing potential cybersecurity threats in an engineered system comprising: storing information relating to cybersecurity in a cybersecurity information layer; performing a functional simulation representative of the engineered system, based in part on the information stored in the cybersecurity information layer; and performing a sequential decision-making optimization to identify a most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest.
The limitation of identifying and analyzing potential cybersecurity threats in an engineered system, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually identify and analyze potential cybersecurity threats.
The limitation of storing information relating to cybersecurity in a cybersecurity information layer, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually store information relating to cybersecurity in a cybersecurity information layer.
The limitation of performing a functional simulation representative of the engineered system, based in part on the information stored in the cybersecurity information layer, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually perform a functional simulation representative of an engineered system.
The limitation of performing a sequential decision-making optimization to identify a most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually perform a sequential decision-making optimization to identify a most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest.
This judicial exception is not integrated into a practical application. The claim recites the limitation “performing a sequential decision-making optimization to identify a most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest”. This limitation generally applies sequential decision-making optimization to identify a most impactful cyberattack vector without placing any limits on how the “optimization” functions or on what the outcome of identifying a most impactful cyberattack vector is. Merely identifying a most impactful cyberattack vector does not integrate the abstract idea into a practical application. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed to an abstract idea. The claim is not patent eligible.
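Purely for illustration of the three recited steps (storing, simulating, optimizing), the sketch below arranges them as a toy pipeline. Every function name, data value, and the notional KPI model are hypothetical and are not taken from the application.

```python
# Illustrative sketch only: a hypothetical arrangement of the three recited
# steps. Nothing here reflects the application's actual implementation.

def store_threat_info(info_layer: dict, key: str, value) -> None:
    """Step 1: store cybersecurity information in an information layer."""
    info_layer[key] = value

def run_functional_simulation(info_layer: dict) -> dict:
    """Step 2: a stand-in 'simulation' that reads the stored information.

    Toy model: each attack degrades a notional KPI (starting at 100)
    by its listed impact.
    """
    kpi = 100.0
    return {
        attack: kpi - impact
        for attack, impact in info_layer.get("attacks", {}).items()
    }

def pick_most_impactful(sim_results: dict) -> str:
    """Step 3: choose the attack vector with the worst resulting KPI."""
    return min(sim_results, key=sim_results.get)

layer: dict = {}
store_threat_info(layer, "attacks", {"spoof_link": 30.0, "dos_router": 55.0})
results = run_functional_simulation(layer)
print(pick_most_impactful(results))  # prints "dos_router"
```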
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the cybersecurity information layer comprises topology and device information corresponding to the engineered system, and information about potential cybersecurity attacks that may be performed affecting a device or communication link of the engineered system. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine topology and device information corresponding to the engineered system.
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the cybersecurity information layer further comprises information about measures of an associated effort required for implementation of each of the potential cybersecurity attacks affecting a device or communication link. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine information about measures of an associated effort required for implementation of each of the potential cybersecurity attacks.
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the sequential decision making optimization is implemented using artificial intelligence (AI) based techniques. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually implement the sequential decision making optimization using artificial intelligence based techniques.
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein one of the communication links is a logical link between a first device and a second device. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that one of the communication links is a logical link.
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein one of the communication links is a physical network link between a first device and a second device. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that one of the communication links is a physical network link.
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the physical network link connects the first device and a router. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that the physical network link connects the first device and a router.
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites iteratively performing the sequential decision-making optimization to produce a sequence of attacker steps that generates a maximum disruption to operation of the engineered system. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually and iteratively perform the sequential decision-making optimization to produce a sequence of attacker steps that generates a maximum disruption to operation of the engineered system.
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites measuring and optimizing the sequential decision-making optimization based on configurable key performance indicators (KPIs) associated with operation of the engineered system. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually measure and optimize the sequential decision-making optimization based on configurable key performance indicators.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites storing, in the cybersecurity information layer, information about the engineered system's topology encoded in netJSON format. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually store, in the cybersecurity information layer, information about the engineered system's topology encoded in netJSON format.
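For background on the netJSON format referenced in claim 10: NetJSON defines a NetworkGraph JSON object that lists a topology's nodes and links. The sketch below builds a minimal NetworkGraph-style document, assuming the standard NetJSON NetworkGraph fields; the device identifiers and properties are hypothetical.

```python
import json

# Minimal NetJSON NetworkGraph-style document (see netjson.org).
# Device identifiers and "role" properties are hypothetical examples.
topology = {
    "type": "NetworkGraph",
    "protocol": "static",   # "static" stands in for a routing protocol name
    "version": None,
    "metric": None,
    "nodes": [
        {"id": "plc-01", "properties": {"role": "controller"}},
        {"id": "router-01", "properties": {"role": "gateway"}},
    ],
    "links": [
        {"source": "plc-01", "target": "router-01", "cost": 1.0},
    ],
}

# Round-trip through JSON, as a cybersecurity information layer might store it.
encoded = json.dumps(topology)
decoded = json.loads(encoded)
print(decoded["type"])        # prints "NetworkGraph"
print(len(decoded["nodes"]))  # prints 2
```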
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein, for a device in the engineered system's topology, adversary actions are encoded in layers including an exposure layer, an exploitability layer and an end-effect layer. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually encode a device's adversary actions in layers including an exposure layer, an exploitability layer and an end-effect layer.
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein, for a communication link in the engineered system's topology, adversary actions are encoded in layers including an exposure layer and an end-effect layer. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually encode a communication link's adversary actions in layers including an exposure layer and an end-effect layer.
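The layered encoding recited in claims 11 and 12 (exposure, exploitability, and end-effect layers for devices; exposure and end-effect layers for links) can be sketched as plain data structures. This is a hypothetical illustration; all field values are invented and only the three layer names come from the claims.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the layered encoding in claims 11-12.
@dataclass
class DeviceActions:
    exposure: list = field(default_factory=list)        # how the device is reachable
    exploitability: list = field(default_factory=list)  # how it could be compromised
    end_effect: list = field(default_factory=list)      # what a compromise achieves

@dataclass
class LinkActions:
    # Per claim 12, links carry only exposure and end-effect layers.
    exposure: list = field(default_factory=list)
    end_effect: list = field(default_factory=list)

plc = DeviceActions(
    exposure=["reachable_from_corporate_lan"],
    exploitability=["default_credentials"],
    end_effect=["halt_process"],
)
uplink = LinkActions(
    exposure=["unencrypted_segment"],
    end_effect=["traffic_injection"],
)

print(len(plc.exploitability))             # prints 1
print(hasattr(uplink, "exploitability"))   # prints False
```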
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites considering, during the AI-based sequential decision-making optimization, an associated effort required for each possible action taken by an attacker. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine an associated effort required for each possible action taken by an attacker.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites considering, during the AI-based sequential decision-making optimization, an attacker profile representative of the skill level of an attacker. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine an attacker profile representative of the skill level of an attacker.
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the AI-based sequential decision-making optimization is performed using a Monte Carlo Tree Search. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually perform the AI-based sequential decision-making optimization using a Monte Carlo Tree Search.
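For background on the Monte Carlo Tree Search referenced in claims 15 and 19: MCTS repeatedly selects, expands, simulates (rolls out), and backpropagates to pick a promising first action in a sequential decision problem. The sketch below runs standard UCT over a toy two-step "attack" problem; the action set, reward function, and all parameters are invented and bear no relation to the application's disclosed optimization.

```python
import math
import random

# Generic UCT Monte Carlo Tree Search on a toy sequential problem.
# Purely illustrative: the toy reward makes action 2 the most "impactful".
ACTIONS = [0, 1, 2]
DEPTH = 2

def reward(seq):
    # Toy disruption score for a sequence of attacker steps.
    return sum(1.0 if a == 2 else 0.1 for a in seq)

class Node:
    def __init__(self, seq):
        self.seq = seq          # actions taken to reach this node
        self.children = {}      # action -> child Node
        self.visits = 0
        self.value = 0.0

def select_child(node):
    # UCB1: balance exploitation (mean value) and exploration.
    return max(
        node.children.values(),
        key=lambda c: c.value / c.visits
        + math.sqrt(2 * math.log(node.visits) / c.visits),
    )

def mcts(iterations=2000, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.seq) < DEPTH and len(node.children) == len(ACTIONS):
            node = select_child(node)
            path.append(node)
        # Expansion: add one untried action.
        if len(node.seq) < DEPTH:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            child = Node(node.seq + [a])
            node.children[a] = child
            node = child
            path.append(node)
        # Rollout: complete the sequence randomly, then score it.
        seq = node.seq + [random.choice(ACTIONS)
                          for _ in range(DEPTH - len(node.seq))]
        r = reward(seq)
        # Backpropagation.
        for n in path:
            n.visits += 1
            n.value += r
    # Recommend the most-visited first action.
    return max(root.children, key=lambda a: root.children[a].visits)

print(mcts())
```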
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Furthermore, this claim recites features similar to those of claim 1. Therefore, claim 16 is rejected in a similar manner as in the rejection of claim 1. As for the limitation of “perform an artificial intelligence (AI) based sequential decision-making optimization to identify a sequence of attacker actions”, this limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually perform an artificial intelligence (AI) based sequential decision-making optimization to identify a sequence of attacker actions.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of a generic component such as a “computer-based system”, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. The claim recites the limitation “perform an artificial intelligence (AI) based sequential decision-making optimization to identify a sequence of attacker actions”. This limitation generally applies AI based decision-making optimization to identify a sequence of attacker actions without placing any limits on how the “optimization” functions or on what the outcome of identifying a sequence of attacker actions is. Merely identifying a sequence of attacker actions does not integrate the abstract idea into a practical application. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. In particular, the claim only recites one additional element: a computer processor implemented system. The “computer processor” is recited at a high level of generality (i.e., as a generic computer processor performing the method) such that it amounts to no more than mere instructions to apply the exception using a generic computer processor. Mere instructions to apply an exception using a generic computer processor cannot provide an inventive concept. The claim is not patent eligible.
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites identify the sequence of attacker actions that represent a most impactful attack on the engineered system based on a key performance indicator (KPI) of interest. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually identify the sequence of attacker actions that represent a most impactful attack on the engineered system based on a key performance indicator.
Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the cyberattack information includes information relating to a topology and devices in the engineered system from a computer network perspective. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that the cyberattack information includes information relating to a topology and devices in the engineered system.
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites wherein the AI-based sequential decision-making optimization is performed using a Monte Carlo Tree Search. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually implement the AI-based sequential decision-making optimization using a Monte Carlo Tree Search.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites for each possible attack affecting a device or communication link of the engineered system, computing a measure of associated effort required to carry out each possible attack. Therefore, the limitations of this claim, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually compute, for each possible attack affecting a device or communication link of the engineered system, a measure of associated effort required to carry out each possible attack.
Dependent claims 2-15 and 17-20 are directed to abstract ideas and do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The judicial exception is not integrated into a practical application. Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH (US-20170223037-A1) in view of KUPPANNA (US-20200021609-A1), hereinafter SINGH-KUPPANNA.
Regarding claim 1, SINGH teaches “A method for identifying and analyzing potential cybersecurity threats in an engineered system comprising:” ([SINGH, para. 0263] “integration of control system networks with public and corporate networks increases the accessibility of control system vulnerabilities. These vulnerabilities can expose all levels of the industrial control system network architecture to complexity-induced error, adversaries and a variety of cyber threats, including worms and other malware.”) ([SINGH, para. 0003] “Provided are methods, network devices, and computer-program products for obtaining targeted threat intelligence using a high-interaction network. In various implementations, targeted threat intelligence includes using a network device in a network to receive suspect network traffic. Suspect network traffic can include network traffic identified as potentially causing harm to the network.”)

“storing information relating to cybersecurity in a cybersecurity information layer;” ([SINGH, para. 0194] “The security device 660 may also maintain historic information. For example, the security device 660 may provide snapshots of the network 600 taken once a day, once a week, or once a month. The security device 660 may further provide a list of devices that have, for example, connected to the wireless signal in the last hour or day, at what times, and for how long.”) ([SINGH, para. 0317] “the deception profiler 1410 can receive information associated with the site network to use with the engines described above. For example, the deception profiler 1410 can receive a network topology 1420. The network topology 1420 can include network information associated with one or more network devices in the site network.”) ([SINGH, para. 0318] “The network information can also include a number and distribution of assets in a subnetwork in relation to the site network. … In some implementations, the network topology 1420 can be determined using an active directory.”) ([SINGH, para. 0320] “The deception profiler 1410 can also receive historical attack information 1440. The source of the historical attack information 1440 can depend on the type of system implemented in the network. For example, historical attack information 1440 can be received from a security operations center (SOC)”) ([SINGH, para. 0274] “The deception profiler 1230 may receive network information 1214 from the site network. This network information 1214 may include information such as subnet addresses, IP addresses in use, an identity and/or configuration of devices in the site network”)

“performing a functional simulation representative of the engineered system, based in part on the information stored in the cybersecurity information layer; and” ([SINGH, para. 0094] “Using the deception center 108 the system 100 can scan the site network 104 and determine the topology of the site network 104. The deception center 108 may then determine devices to emulate with security mechanisms, including the type and behavior of the device. The security mechanisms may be selected and configured specifically to attract the attention of network attackers.”) ([SINGH, para. 0275] “The deception profiler 1230 in this example includes a location engine 1232, a density engine 1234, a configuration engine 1236 … The configuration engine 1236 may determine how each deception mechanism is to be configured, and may provide configurations to the network emulator 1220. The scheduling engine 1238 may determine when a deception mechanism should be deployed and/or activated.”) ([SINGH, para. 0277] “The deployment strategy may include instructing the network emulator 1220 to add, remove, and/or modify emulated network devices in the emulated network 1216, and/or to modify the deception mechanisms projected into the site network.”) ([SINGH, para. 0299] “When the network emulator 1320 receives suspect network traffic addressed to an address deception, the network emulator 1320 may initiate a low-interaction deception mechanism 1328 a-1328 d, to respond to the network traffic. … The low-interaction deceptions 1328 a-1328 d are emulated systems that may be capable of receiving network traffic for multiple MAC and IP address pairs. The low-interaction deceptions 1328 a-1328 d may have a basic installation of an operating system, and typically have a full suite of services that may be offered by real system with the same operating system.”)

“performing a sequential decision-making optimization to identify a most impactful cyberattack vector …” ([SINGH, para. 0345] “An attack pattern detector 1506 may collect data 1504 a-1504 c from the site network 1502 and/or an emulated network 1516. This collected data 1504 a-1504 c may come from various sources, such as servers, computers devices, and network infrastructure devices in the site network 1502, and from previously-deployed deception mechanisms”) ([SINGH, para. 0363] “Additionally, in some cases, the process 1600 may identify multiple attack patterns simultaneously or successively, all of which may be provided to the process 1710 of FIG. 17A, or some of which may be provided while the rest are set aside for later processing. The process 1710 may, at step 1792, get the next highest ranked attack pattern. The ranking may indicate a seriousness, importance, urgency, or otherwise indicate an order in which the attack patterns should be addressed.”) ([SINGH, para. 0364] “For the next highest ranked attack pattern, at step 1794, the process 1710 generates a dynamic deployment strategy.”)
However, SINGH does not teach “most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest”.
In an analogous teaching, KUPPANNA teaches “most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest” ([KUPPANNA, para. 0178] “FIG. 9 illustrates a table 900 including an exemplary set of threat information and action information in the form of records containing information correlating threat types, threat levels and corresponding actions to mitigate threats of the identified threat type and level. … The threat level is determined based on the key performance indicators regarding the threat and its current effect on the system and/or subscriber(s).”)
Thus, given the teaching of KUPPANNA, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest by KUPPANNA into the teaching of a method for identifying and analyzing potential cybersecurity threats in an engineered system by SINGH. One of ordinary skill in the art would have been motivated to do so because KUPPANNA recognizes the need to efficiently detect anomalies ([KUPPANNA, para. 0003] “Among what is missing in the state of the art for Unified Communications is a holistic system that monitors the Unified Communications network, detects anomalies”) ([KUPPANNA, para. 0004] “it is apparent that there is a need for a technological solution to how to effectively, efficiently and in a cost-efficient manner monitor, detect and/or mitigate threats and/or anomalies in networks”) ([KUPPANNA, para. 0005] “Various methods and apparatus are described which allow for a combination of automated and operator controlled responses to threats.”)
Regarding claim 16, this claim recites a computer-based system that performs the steps of method claim 1. Therefore, claim 16 is rejected in a similar manner as in the rejection of claim 1. SINGH further teaches “an artificial intelligence (AI) based sequential decision-making optimization to identify a sequence of attacker actions.” ([SINGH, para. 0389] “The contents of the analysis database 1840 may be provided to the analytic engine 1818 for detail analysis. … Generally, each analysis engine 1940 may apply one or more of heuristic algorithms, probabilistic algorithms, machine learning algorithms, and/or pattern matching algorithms, in addition to emulators, to detect whether data (e.g., files, email, network packets, etc.) from the analysis database 1930 is malicious.”) ([SINGH, para. 0387] “Lateral movement 1832 may be captured, for example, as a trace of activity among multiple devices emulated in the emulated network 1816. … Lateral movement 1832 data may put this information together and provide a cohesive description of an attack.”) ([SINGH, para. 0738] “Accesses originating from the user workstation 4876 and connecting to other emulated systems may be called lateral movement. Lateral movement is a strong indicator of malicious activity. The pattern of lateral movement may also be interesting for understanding the scope and nature of an attack. Hence, the infiltration may be allowed to continue for some time, in order to learn as much as possible about the attacker.”)
Regarding claim 2, SINGH-KUPPANNA teaches all limitations of claim 1. SINGH further teaches “wherein the cybersecurity information layer comprises topology and device information corresponding to the engineered system, and information about potential cybersecurity attacks that may be performed affecting a device or communication link of the engineered system.” ([SINGH, para. 0181] “Once the security device 660 has learned the topology and/or activity of the network 600, the security device 660 may be able to provide deception-based security for the network 600.”) ([SINGH, para. 0317] “the deception profiler 1410 can receive information associated with the site network to use with the engines described above. For example, the deception profiler 1410 can receive a network topology 1420. The network topology 1420 can include network information associated with one or more network devices in the site network.”) ([SINGH, para. 0318] “The network information can also include a number and distribution of assets in a subnetwork in relation to the site network. … In some implementations, the network topology 1420 can be determined using an active directory.”) ([SINGH, para. 0320] “The deception profiler 1410 can also receive historical attack information 1440. The source of the historical attack information 1440 can depend on the type of system implemented in the network.”) ([SINGH, para. 0326] “some implementations, the density engine 1414 can use the network topology 1420, the machine information 1430, and/or the historical attack information 1440 to determine densities, summary statistics, or a combination of information.”)
Regarding claim 3, SINGH-KUPPANNA teaches all limitations of claim 2. SINGH further teaches “wherein the cybersecurity information layer further comprises information about measures of an associated effort required for implementation of each of the potential cybersecurity attacks affecting a device or communication link.” ([SINGH, para. 0076] “Threats to a network can include active attacks, where an attacker interacts or engages with systems in the network to steal information or do harm to the network. An attacker may be a person, or may be an automated system. Examples of active attacks include denial of service (DoS) attacks, distributed denial of service (DDoS) attacks, spoofing attacks, “man-in-the-middle” attacks, attacks involving malformed network requests (e.g. Address Resolution Protocol (ARP) poisoning, “ping of death,” etc.), buffer, heap, or stack overflow attacks, and format string attacks, among others.”) ([SINGH, para. 0105] “a network threat detection engine 140 may monitor activity in the emulated network 116, and look for attacks on the site network 104. For example, the network threat detection engine 140 may look for unexpected access to the emulated computing systems in the emulated network 116.”)
Regarding claim 4, SINGH-KUPPANNA teaches all limitations of claim 1. SINGH further teaches “the sequential decision making optimization is implemented using artificial intelligence (AI) based techniques.” ([SINGH, para. 0388] “As noted above, the data 1820 extracted from the emulated network 1816 may be accumulated in an analysis database 1840.”) ([SINGH, para. 0389] “The contents of the analysis database 1840 may be provided to the analytic engine 1818 for detail analysis. … Generally, each analysis engine 1940 may apply one or more of heuristic algorithms, probabilistic algorithms, machine learning algorithms, and/or pattern matching algorithms, in addition to emulators, to detect whether data (e.g., files, email, network packets, etc.) from the analysis database 1930 is malicious.”)
Regarding claim 5, SINGH-KUPPANNA teaches all limitations of claim 3. SINGH further teaches “wherein one of the communication links is a logical link between a first device and a second device.” ([SINGH, para. 0274] “The deception profiler 1230 may receive network information 1214 from the site network. This network information 1214 may include information such as subnet addresses, IP addresses in use, an identity and/or configuration of devices in the site network, and/or profiles of usage patterns of the devices in the site network.”) ([SINGH, para. 0317] “For example, the deception profiler 1410 can receive a network topology 1420. The network topology 1420 can include network information associated with one or more network devices in the site network. For example, the network information can include number of subnetworks that are in the site network and the network devices that are in each subnetwork.”)
Regarding claim 6, SINGH-KUPPANNA teaches all limitations of claim 3. SINGH further teaches “wherein one of the communication links is a physical network link between a first device and a second device.” ([SINGH, para. 0096] “The site network 104 is where the networking devices and users of the an organizations network may be found. The site network 104 may include network infrastructure devices, such as routers, switches hubs, repeaters, wireless base stations, and/or network controllers, among others. The site network 104 may also include computing systems, such as servers, desktop computers, laptop computers, tablet computers, personal digital assistants, and smart phones, among others.”) ([SINGH, para. 0120] “Directly connected, in this example, can mean that the deception center 208 is connected to a router, hub, switch, repeater, or other network infrastructure device that is part of the site network 204.”) ([SINGH, para. 0167] “The wired network can be extended using routers, switches, and/or hubs. In many cases, wired networks may be interconnected with wireless networks”)
Regarding claim 7, SINGH-KUPPANNA teaches all limitations of claim 6. SINGH further teaches “wherein the physical network link connects the first device and a router.” ([SINGH, para. 0120] “Directly connected, in this example, can mean that the deception center 208 is connected to a router, hub, switch, repeater, or other network infrastructure device that is part of the site network 204.”) ([SINGH, para. 0167] “The wired network can be extended using routers, switches, and/or hubs. In many cases, wired networks may be interconnected with wireless networks”)
Regarding claim 8, SINGH-KUPPANNA teaches all limitations of claim 1. SINGH further teaches “further comprising: iteratively performing the sequential decision-making optimization to produce a sequence of attacker steps that generates a maximum disruption to operation of the engineered system.” ([SINGH, para. 0387] “Lateral movement 1832 may be captured, for example, as a trace of activity among multiple devices emulated in the emulated network 1816. … Lateral movement 1832 data may put this information together and provide a cohesive description of an attack.”) ([SINGH, para. 0738] “Accesses originating from the user workstation 4876 and connecting to other emulated systems may be called lateral movement. Lateral movement is a strong indicator of malicious activity. The pattern of lateral movement may also be interesting for understanding the scope and nature of an attack. Hence, the infiltration may be allowed to continue for some time, in order to learn as much as possible about the attacker.”)
Regarding claim 9, SINGH-KUPPANNA teaches all limitations of claim 8. KUPPANNA further teaches “further comprising: measuring and optimizing the sequential decision-making optimization based on configurable key performance indicators (KPIs) associated with operation of the engineered system.” ([KUPPANNA , para. 0178] “The operator previous indicated action is the stored mitigation action indicated to be taken by the operator in response to the corresponding threat type having the associated threat level. The default action is typically an action identified by the user prior to the system experiencing a threat of the corresponding type and level … If no default action is specified and no operator previous indicated action is specified for the threat type and threat level, the system will take no action but continue to operate and monitor the threat and key performance indicators related to the threat. … The threat level is determined based on the key performance indicators regarding the threat and its current effect on the system and/or subscriber(s).”) ([KUPPANNA , para. 0179] “Row 916 of table 900 associates the threats of type 1 (entry column 902, row 916) with a threat level of level 2 (entry column 904, row 916) with operator previous indicated action of block traffic flow corresponding to the detected threat (entry column 906, row 916), action in absence of operator input of block traffic flow corresponding to the detected threat (entry column 908, row 916) and a default action of restrict traffic flow (entry column 910, row 916).”)
The same motivation to modify SINGH with KUPPANNA as in the rejection of claim 1 applies.
Regarding claim 13, SINGH-KUPPANNA teaches all limitations of claim 1. SINGH further teaches “further comprising: considering during the AI-based sequential decision-making optimization, an associated effort required for each possible action taken by an attacker.” ([SINGH, para. 0076] “An attacker may be a person, or may be an automated system. Examples of active attacks include denial of service (DoS) attacks, distributed denial of service (DDoS) attacks, spoofing attacks, “man-in-the-middle” attacks, attacks involving malformed network requests (e.g. Address Resolution Protocol (ARP) poisoning, “ping of death,” etc.), buffer, heap, or stack overflow attacks, and format string attacks, among others.”) ([SINGH, para. 0298] “when an attacker is mapping a network and looking for possible points to attack. For example, an attacker may generate queries for all IP addresses in a broadcast domain”) ([SINGH, para. 0389] “each analysis engine 1940 may apply one or more of heuristic algorithms, probabilistic algorithms, machine learning algorithms, and/or pattern matching algorithms, in addition to emulators, to detect whether data (e.g., files, email, network packets, etc.) from the analysis database 1930 is malicious.”)
Regarding claims 15 and 19, SINGH-KUPPANNA teaches all limitations of claims 1 and 16. SINGH further teaches “wherein the AI-based sequential decision-making optimization is performed using a Monte Carlo Tree Search.” ([SINGH, para. 0467] “The attack trajectory data structure 2605 can be generated by using a modified depth first search algorithm. The modified depth first search algorithm can analyze all of the machine interactions from each machine before stepping deeper into the adjacency data structure 2511. Other search algorithms can be used, including breadth first search and Monte Carlo tree search.”) ([SINGH, para. 0389] “Each analysis engine 1940 may further include sub-modules and plugins, which are also able to apply heuristic, probabilistic, machine learning, and/or pattern matching algorithms, as well as emulators, to determine whether some data is malicious.”)
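For context on the Monte Carlo Tree Search technique referenced in SINGH's para. 0467, the following is a minimal illustrative sketch of MCTS applied to finding a high-impact attacker path. The attack graph, node names, reward values, and all function names are hypothetical illustrations and are not drawn from either reference.

```python
import math
import random

# Hypothetical attack graph: each node maps to reachable next steps, and
# terminal nodes carry an illustrative "disruption" reward (assumed values).
GRAPH = {
    "internet": ["workstation", "printer"],
    "workstation": ["file_server", "domain_controller"],
    "printer": [],            # dead end, low disruption
    "file_server": [],
    "domain_controller": [],
}
REWARD = {"printer": 0.1, "file_server": 0.5, "domain_controller": 1.0}


class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.untried = list(GRAPH[state])
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Upper Confidence Bound: balances exploiting high-value children
        # against exploring rarely visited ones.
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))


def rollout(state, rng):
    # Random playout to a terminal node; return its disruption reward.
    while GRAPH[state]:
        state = rng.choice(GRAPH[state])
    return REWARD.get(state, 0.0)


def mcts(root_state, iterations=500, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 until a node with untried moves.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child, if any remain.
        if node.untried:
            child = Node(node.untried.pop(), parent=node)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout from the new node.
        reward = rollout(node.state, rng)
        # 4. Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # The most-visited child is the recommended first attacker step.
    return max(root.children, key=lambda n: n.visits).state


print(mcts("internet"))
```

In this toy example the search converges on the "workstation" branch, since its subtree contains the higher-disruption terminal states.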
Regarding claim 17, SINGH-KUPPANNA teaches all limitations of claim 16. SINGH further teaches “identify the sequence of attacker actions that represent a most impactful attack on the engineered system.” ([SINGH, para. 0354] “Should an apparent attacker attempt a lateral movement from the deception mechanism 1520 a where he was detected to a production system, the apparent attacker may instead be logged into a security mechanism 1520 b-1520 c that mimics that production server. The apparent attacker may not be aware that his activity has been contained to the emulated network 1516.”) ([SINGH, para. 0587] “Network analysis also looks for lateral movement that may result from suspect network traffic. Lateral movement occurs when an attack on the high-interaction network 3616 moves from one device in the network to another. Lateral movement may involve malware designed to spread between network devices, and/or infiltration of the network by an outside entity.”)
However, SINGH does not teach “impactful attack on the engineered system based on a key performance indicator (KPI) of interest.”
In an analogous teaching, KUPPANNA teaches “most impactful cyberattack vector with respect to a key performance indicator (KPI) of interest” ([KUPPANNA, para. 0178] “FIG. 9 illustrates a table 900 including an exemplary set of threat information and action information in the form of records containing information correlating threat types, threat levels and corresponding actions to mitigate threats of the identified threat type and level. … The threat level is determined based on the key performance indicators regarding the threat and its current effect on the system and/or subscriber(s).”)
The same motivation to modify SINGH with KUPPANNA as in the rejection of claim 1 applies.
Regarding claim 18, SINGH-KUPPANNA teaches all limitations of claim 16. SINGH further teaches “wherein the cyberattack information includes information relating to a topology and devices in the engineered system from a computer network perspective.” ([SINGH, para. 0181] “Once the security device 660 has learned the topology and/or activity of the network 600, the security device 660 may be able to provide deception-based security for the network 600.”) ([SINGH, para. 0317] “the deception profiler 1410 can receive information associated with the site network to use with the engines described above. For example, the deception profiler 1410 can receive a network topology 1420. The network topology 1420 can include network information associated with one or more network devices in the site network. ”) ([SINGH, para. 0318] “The network information can also include a number and distribution of assets in a subnetwork in relation to the site network. … In some implementations, the network topology 1420 can be determined using an active directory.”)
Regarding claim 20, SINGH-KUPPANNA teaches all limitations of claim 16. SINGH further teaches “for each possible attack affecting a device or communication link of the engineered system, computing a measure of associated effort required to carry out each possible attack.” ([SINGH, para. 0076] “Threats to a network can include active attacks, where an attacker interacts or engages with systems in the network to steal information or do harm to the network. An attacker may be a person, or may be an automated system. Examples of active attacks include denial of service (DoS) attacks, distributed denial of service (DDoS) attacks, spoofing attacks, “man-in-the-middle” attacks, attacks involving malformed network requests (e.g. Address Resolution Protocol (ARP) poisoning, “ping of death,” etc.), buffer, heap, or stack overflow attacks, and format string attacks, among others.”) ([SINGH, para. 0107] “activity captured in the emulated network 116 may be analyzed using a targeted threat analysis engine 160. The threat analysis engine 160 may examine data collected in the emulated network 116 and reconstruct the course of an attack. For example, the threat analysis engine 160 may correlate various events seen during the course of an apparent attack, including both malicious and innocuous events, and determine how an attacker infiltrated and caused harm in the emulated network 116.”)
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH-KUPPANNA in view of CAPOANO (“NetJSON: data interchange format for networks”), hereinafter SINGH-KUPPANNA-CAPOANO.
Regarding claim 10, SINGH-KUPPANNA teaches all limitations of claim 1. However, SINGH-KUPPANNA does not teach “storing in the cybersecurity information layer, information about the engineered system's topology encoded as netJSON format”
In an analogous teaching, CAPOANO teaches “storing in the cybersecurity information layer, information about the engineered system's topology encoded as netJSON format.” ([CAPOANO, abstract] “NetJSON is a data interchange format based on JavaScript Object Notation (JSON) designed to describe the basic building blocks of layer2 and layer3 networking. It defines several types of JSON objects and the manner in which they are combined to represent a network: configuration of devices, monitoring data, network topology and routing information.”) ([CAPOANO, Introduction] “The format is concerned with the basic building blocks that compose a computer network (devices, monitoring data, routing, topology)”).
Thus, given the teaching of CAPOANO, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine CAPOANO's teaching of NetJSON with the SINGH-KUPPANNA teaching of a method for identifying and analyzing potential cybersecurity threats in an engineered system. One of ordinary skill in the art would have been motivated to do so because CAPOANO recognizes the need for NetJSON ([CAPOANO, Motivations] “There exist many libraries and web apps for networking, but it is very hard to make them interoperable, that is, making them talk and understand one another with minimum effort. … By defining common data structures we can allow developers to focus on their goals instead of having to struggle with the differences of each vendor, firmware, routing protocol or community.”) ([CAPOANO, Introduction] “These concepts have been streamlined to encourage interoperability between network centric web applications using JSON.”)
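For illustration of the encoding discussed above, the following is a minimal sketch of how a system topology might be expressed as a NetJSON "NetworkGraph" object per CAPOANO. The device identifiers, labels, protocol value, and link cost are hypothetical examples, not taken from any of the cited references.

```python
import json

# A minimal NetJSON NetworkGraph object describing a hypothetical
# two-device engineered-system topology.
topology = {
    "type": "NetworkGraph",
    "protocol": "static",   # routing protocol that produced the graph (assumed)
    "version": None,        # protocol version (null when not applicable)
    "metric": "hop",        # metric used for link costs
    "nodes": [
        {"id": "plc-01", "label": "PLC controller"},
        {"id": "hmi-01", "label": "Operator HMI"},
    ],
    "links": [
        {"source": "plc-01", "target": "hmi-01", "cost": 1.0},
    ],
}

# Serialize for storage in an information layer, then verify round-trip.
encoded = json.dumps(topology, indent=2)
decoded = json.loads(encoded)
print(decoded["type"], len(decoded["nodes"]), len(decoded["links"]))
```

Because NetJSON is plain JSON, any standard JSON library can produce and consume such topology records without vendor-specific tooling, which is the interoperability benefit CAPOANO describes.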
Regarding claim 11, SINGH-KUPPANNA-CAPOANO teaches all limitations of claim 10. SINGH further teaches “wherein a device in the engineered system's topology adversary actions are encoded in layers including an exposure layer,” ([SINGH, para. 0354] “The network security system 100 may deploy deceptive security mechanisms in a targeted and dynamic fashion. Using the deception center 108 the system 100 can scan the site network 104 and determine the topology of the site network 104.”) ([SINGH, para. 0354] “Should an apparent attacker attempt a lateral movement from the deception mechanism 1520 a where he was detected to a production system, the apparent attacker may instead be logged into a security mechanism 1520 b-1520 c that mimics that production server. The apparent attacker may not be aware that his activity has been contained to the emulated network 1516.”) “an exploitability layer” ([SINGH, para. 0587] “Network analysis also looks for lateral movement that may result from suspect network traffic. Lateral movement occurs when an attack on the high-interaction network 3616 moves from one device in the network to another. Lateral movement may involve malware designed to spread between network devices, and/or infiltration of the network by an outside entity.”) “and an end-effect layer.” ([SINGH, para. 0587] “The high-interaction network 3616 may also see an attack 3688 on the compute servers 3670, using the stolen credentials, to take the compute servers 3670 offline. Each of these attacks 3686, 3688 may be considered lateral movement of an attack 3692 that started at the user workstations 3676.”)
Regarding claim 12, SINGH-KUPPANNA-CAPOANO teaches all limitations of claim 10. SINGH further teaches “wherein a communication link in the engineered system's topology adversary actions are encoded in layers including an exposure layer” ([SINGH, para. 0354] “The network security system 100 may deploy deceptive security mechanisms in a targeted and dynamic fashion. Using the deception center 108 the system 100 can scan the site network 104 and determine the topology of the site network 104.”) ([SINGH, para. 0354] “Should an apparent attacker attempt a lateral movement from the deception mechanism 1520 a where he was detected to a production system, the apparent attacker may instead be logged into a security mechanism 1520 b-1520 c that mimics that production server. The apparent attacker may not be aware that his activity has been contained to the emulated network 1516.”) “and an end-effect layer.” ([SINGH, para. 0587] “The high-interaction network 3616 may also see an attack 3688 on the compute servers 3670, using the stolen credentials, to take the compute servers 3670 offline. Each of these attacks 3686, 3688 may be considered lateral movement of an attack 3692 that started at the user workstations 3676.”)
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over SINGH-KUPPANNA in view of LAIDLAW (“US-20150163242-A1”).
Regarding claim 14, SINGH-KUPPANNA teaches all limitations of claim 13. However, SINGH-KUPPANNA does not teach “considering during the AI-based sequential decision-making optimization, an attacker profile representative of the skill level of an attacker.”
In an analogous teaching, LAIDLAW teaches “considering during the AI-based sequential decision-making optimization, an attacker profile representative of the skill level of an attacker.” ([LAIDLAW, para. 0161] “The present invention provides a means to increase a system administrator's understanding of the seriousness of an attack on a target computing environment 401 and provide a methodology for determining the likely profile of an attacker; e.g. from an amateur with basic hacking skills to a sophisticated attacker incorporating many attack vectors. It is the constantly changing modes of attack within the control environment 402 that will provide the useful data to help continually improve our understanding of an attacker.”) ([LAIDLAW, para. 0127] “Any suitable artificially intelligent machine learning or data mining techniques, such as clustering, classification or associative rule generation, which can be used to identify patterns in the attacks based on packet data or attributes derived from packet data, can be used in the predictive model generation engine 413.”).
Thus, given the teaching of LAIDLAW, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine LAIDLAW's teaching of an attacker profile representative of the skill level of an attacker with the SINGH-KUPPANNA teaching of a method for identifying and analyzing potential cybersecurity threats in an engineered system. One of ordinary skill in the art would have been motivated to do so because LAIDLAW recognizes the need to protect computer resources ([LAIDLAW, para. 0012] “It would therefore be desirable to provide a mechanism to facilitate the administrators of secure computing environments in effectively policing access to computer resources.”) ([LAIDLAW, para. 0021] “Thus advantages that may be provided by embodiments of the present invention include: … Threat assessment improves decision making of system administrators and reduces data deluge—allowing attacks to be prioritised and treated accordingly”)
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
MOMOT (US-20160197943-A1): This prior art teaches methods and media for generating a profile score for an attacker, involving a detection unit configured to identify one or more malicious code elements in a payload, a weighting unit configured to associate a weighting value with each identified malicious code element, and a classification unit configured to sum the weighting values associated with the identified malicious code elements and associate a classification with the attacker based on a score derived from the weighting values. Some examples also involve applying a model to the weighting values for identified malicious code elements, which may include a Markov model, a model based on apparent skill, a model based on resourcing required by the malicious code, or a model based on behavior patterns.
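The weighting-and-classification scheme summarized above can be illustrated with a minimal sketch. The element names, weighting values, and classification thresholds below are hypothetical assumptions for illustration only and are not taken from MOMOT.

```python
# Illustrative weighting values associated with detected malicious code
# elements (all names and values are assumed, not from the reference).
WEIGHTS = {
    "sql_injection": 1.0,
    "obfuscated_payload": 2.5,
    "custom_exploit": 5.0,
}

def classify_attacker(detected_elements):
    """Sum the weighting values of detected elements and map the
    resulting score to an attacker classification (assumed thresholds)."""
    score = sum(WEIGHTS.get(element, 0.0) for element in detected_elements)
    if score >= 5.0:
        label = "sophisticated"
    elif score >= 2.0:
        label = "intermediate"
    else:
        label = "amateur"
    return score, label

print(classify_attacker(["sql_injection", "obfuscated_payload"]))  # (3.5, 'intermediate')
```

A model (e.g., Markov or behavior-pattern based, as the reference contemplates) could replace the simple summation while keeping the same element-to-weight-to-classification pipeline.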
DEARDORFF (US-11418528-B2): This prior art teaches methods, systems, and processes to facilitate and perform dynamic best path determination for penetration testing. An action path that includes a kill chain involving performance of exploit actions for a phase of a penetration test is generated by identifying the exploit actions based on a penetration parameter, a detection parameter, and/or a time parameter associated with the exploit actions. Performance of the identified exploit actions permits successful completion of the phase of the penetration test and designates the action path for inclusion as part of a best path for the penetration test.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A./
02/01/2026
/AFAQ ALI/Examiner, Art Unit 2434
/NOURA ZOUBAIR/Primary Examiner, Art Unit 2434