Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
MPEP 2112 Section III.
Where applicant claims a composition in terms of a function, property or characteristic and the composition of the prior art is the same as that of the claim but the function is not explicitly disclosed by the reference, the examiner may make a rejection under both 35 U.S.C. 102 and 103, expressed as a 102/103 rejection. "There is nothing inconsistent in concurrent rejections for obviousness under 35 U.S.C. 103 and for anticipation under 35 U.S.C. 102." In re Best, 562 F.2d 1252, 1255 n.4, 195 USPQ 430, 433 n.4 (CCPA 1977). This same rationale should also apply to product, apparatus, and process claims claimed in terms of function, property or characteristic. Therefore, a 35 U.S.C. 102/103 rejection is appropriate for these types of claims as well as for composition claims.
Claims 1, 5, 9, 10, 11, 15, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh (US Pub. No. 2020/0137104 A1) in view of McGrew (US Pub. No. 2024/0333765 A1).
Per claim 1, Hassanzadeh (US Pub. No. 2020/0137104 A1) suggests a system, comprising: a processor configured to (see Hassanzadeh para 0143 - 0146): receive a graph of a network that includes (reads on a graph that is representative of an enterprise network, see Hassanzadeh para 0005) one or more vulnerabilities and/or one or more risk findings (reads on an attack graph representing all possible impacts and vulnerabilities, network/system configurations and possible impacts on a respective target, see Hassanzadeh para 0008 and 0049 – 0051);
contextualize (reads on identifies critical assets and critical paths and provides context about which elements are more important, see Hassanzadeh para 0005, 0050, 0051 and 0064) the graph of the network (reads on a graph that is representative of an enterprise network, see Hassanzadeh para 0005); and a memory coupled to the processor and configured to provide the processor with instructions (see Hassanzadeh para 0143 - 0146). The prior art of record is silent on explicitly stating: generating one or more prompts and inputting the contextualized graph to a Large-Language Model (LLM); and generating an output that summarizes the contextualized graph using the LLM.
[0005] In some implementations, actions include providing, by a security platform, graph data defining a graph that is representative of an enterprise network, the graph comprising nodes and edges between nodes, a set of nodes representing respective assets within the enterprise network, each edge representing at least a portion of one or more lateral movement paths between assets in the enterprise network, determining, for each asset, a criticality of the respective asset to operation of a process, determining a lateral movement path between a first node represented by a first asset and a second node represented by second asset within the graph, determining a path value representative of a criticality in preventing an attack through the lateral movement path, and providing an indication of the path value representative of the criticality in preventing an attack through the lateral movement path.
[0048] In the example of FIG. 2, the AgiHack service 208 includes an attack graph (AG) generator 226, an AG database 228, and an analytics module 230. In general, the AgiHack service 208 constructs AGs and evaluates hacking exploitation complexity. In some examples, the AgiHack service 208 understand attack options, leveraging the vulnerabilities to determine how a hacker would move inside the network and identify targets for potential exploitation. The AgiHack service 208 proactively explores adversarial options and creates AGs representing possible attack paths from the adversary's perspective. The AgiHack service 208 provides both active and passive vulnerability scanning capabilities to comply with constraints, and identifies device and service vulnerabilities, configuration problems, and aggregate risks through automatic assessment.
[0049] In further detail, the AgiHack service 208 provides rule-based processing of data provided from the AgiDis service 214 to explore all attack paths an adversary can take from any asset to move laterally towards any target (e.g., running critical operations). In some examples, multiple AGs are provided, each AG corresponding to a respective target within the enterprise network. Further, the AgiHack service 208 identifies possible impacts on the targets. In some examples, the AG generator 226 uses data from the asset/vulnerabilities knowledge base 236 of the AgiDis service 214, and generates an AG. In some examples, the AG graphically depicts, for a respective target, all possible impacts that may be caused by a vulnerability or network/system configuration, as well as all attack paths from anywhere in the network to the respective target. In some examples, the analytics module 230 processes an AG to identify and extract information regarding critical nodes, paths for every source-destination pair (e.g., shortest, hardest, stealthiest), most critical paths, and critical vulnerabilities, among other features of the AG. If remediations are applied within the enterprise network, the AgiHack service 208 updates the AG.
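The attack-path enumeration attributed to the AG generator in paragraph [0049] above ("all attack paths from anywhere in the network to the respective target") can be illustrated by a minimal sketch; the asset names and graph topology below are the editor's hypothetical examples, not drawn from the reference:

```python
from collections import defaultdict

def all_attack_paths(edges, target):
    """Enumerate every simple path ending at `target` in a directed
    attack graph given as (source, destination) edge pairs."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    paths = []

    def dfs(node, path):
        if node == target:
            paths.append(path)
            return
        for nxt in graph[node]:
            if nxt not in path:  # simple paths only: never revisit a node
                dfs(nxt, path + [nxt])

    for start in {s for s, _ in edges}:
        if start != target:
            dfs(start, [start])
    return paths

# Hypothetical topology: an internet-facing web host, a workstation,
# and a database server as the target asset.
example = all_attack_paths([("web", "ws1"), ("ws1", "db"), ("web", "db")], "db")
```

Each returned path corresponds to one lateral-movement route from an arbitrary asset to the target, mirroring the exhaustive-path behavior the reference describes.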
[0050] In the example of FIG. 2, the AgiRem service 210 includes a graph explorer 232 and a summarizer 234. In general, the AgiRem service 210 provides remediation options to avoid predicted impacts. For example, the AgiRem service 210 provides options to reduce lateral movement of hackers within the network and to reduce the attack surface. The AgiRem service 210 predicts the impact of asset vulnerabilities on the critical processes and adversary capabilities along kill chain/attack paths and identifies the likelihood of attack paths to access critical assets and prioritizes the assets (e.g., based on shortest, easiest, stealthiest). The AgiRem service 210 identifies remediation actions by exploring attack graph and paths.
[0051] In further detail, for a given AG (e.g., representing all vulnerabilities, network/system configurations, and possible impacts on a respective target) generated by the AgiHack service 208, the AgiRem service 210 provides a list of efficient and effective remediation recommendations using data from the vulnerability analytics module 236 of the AgiInt service 212. In some examples, the graph explorer 232 analyzes each feature (e.g., nodes, edges between nodes, properties) to identify any condition (e.g., network/system configuration and vulnerabilities) that can lead to cyber impacts. Such conditions can be referred to as issues. For each issue, the AgiRem service 210 retrieves remediation recommendations and courses of action (CoA) from the AgiInt service 212, and/or a security knowledge base (not shown). In some examples, the graph explorer 232 provides feedback to the analytics module 230 for re-calculating critical nodes/assets/paths based on remediation options. In some examples, the summarizer engine 234 is provided as a natural language processing (NLP) tool that extracts concise and salient text from large/unstructured threat intelligence feeds. In this manner, the AgiSec platform can convey information to enable users (e.g., security teams) to understand immediate remediation actions corresponding to each issue.
[0064] As introduced above, implementations of the present disclosure provide for prioritization of actions for remediation of cyber attacks based on lateral movements of a malicious user within a network. More particularly, and as described in further detail herein, implementations of the present disclosure consider the ability of malicious users to access supporting CIs from the network through lateral movements and estimate which attack path should be handled first in order to prevent a comprised CI. In some implementations, a relative importance and complexity of an attack path are determined and cyber actions to block accessing a CI are prioritized. In this manner, cyber actions are efficiently implemented to prevent damage and reduce the attack surface and internals of the network, gradually increasing the entire network cyber resilience.
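The path prioritization described in paragraph [0064] ("estimate which attack path should be handled first") can be sketched as a simple scoring function; the scoring formula (average asset criticality, shorter paths breaking ties) is the editor's assumption, not the reference's method:

```python
def prioritize_paths(paths, criticality):
    """Order attack paths so the handle-first path comes first:
    higher average asset criticality wins; shorter paths break ties."""
    def path_value(path):
        total = sum(criticality.get(node, 0) for node in path)
        return (total / len(path), -len(path))
    return sorted(paths, key=path_value, reverse=True)
```

A direct hop into a highly critical asset thus outranks a longer route through low-value assets, consistent with the reference's goal of blocking the most damaging lateral movement first.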
[0143] Implementations and all of the functional operations described in this specification may be realized in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations may be realized as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “computing system” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question (e.g., code) that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal (e.g., a machine-generated electrical, optical, or electromagnetic signal) that is generated to encode information for transmission to suitable receiver apparatus.
[0144] A computer program (also known as a program, software, software application, script, or code) may be written in any appropriate form of programming language, including compiled or interpreted languages, and it may be deployed in any appropriate form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[0145] The processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry (e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit)).
[0146] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any appropriate kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. Elements of a computer can include a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data (e.g., magnetic, magneto optical disks, or optical disks). However, a computer need not have such devices. Moreover, a computer may be embedded in another device (e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver). Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
[Image: media_image1.png, greyscale, 596 × 998]
McGrew (US Pub. No. 2024/0333765 A1) is relied upon to teach contextualize the graph of the network (reads on taking the full ontology graph, which represents various relations between entities, extracting relevant portions as a subgraph and converting the subgraph to understandable attack vectors, see McGrew para 0073 – 0075 and 0081. The Examiner construes extracting the subgraph and converting it to attack vectors to be the same as Applicant’s contextualizing because both perform the same function of compressing/linearizing the full graph into meaningful security information and identifying critical elements); generate one or more prompts (reads on the use of the prompt generator, see McGrew para 0078 – 0081) and input (reads on the subgraph/attack vectors being input to the prompt generator, see McGrew para 0075 and 0078) the contextualized graph to a Large-Language Model (LLM) (reads on the prompt generator using a GPT model, see McGrew para 0111); and generate an output that summarizes (reads on the summary that is the output of the prompt generator, see McGrew Figure 2 blocks 230 and 232) the contextualized graph using the LLM (per the construction of contextualizing above; see McGrew para 0073 – 0075, 0081 and 0111).
[0073] FIG. 2 shows an example of an ontology summary system 200 that generates prompts summarizing the security incident giving rise to a threat alert. The ontology summary system 200 has an ontology generator 208 that receives various inputs, including, e.g., a threat alerts 202, a third-party ontologies 204, an additional inputs 206 Based on these inputs, the ontology generator 208 creates an ontology graph 210 that represents various relations between entities of computational instructions that have been executed by a computer/processor. These entities can include files, executable binary, processes, domain names, IP addresses, etc.
[0074] The ontology summary system 200 also has a query generator 214 that creates a query 216 based on values from a telemetry graph database 212, which stores graphs/patterns that represent respective malicious behaviors. The query 216 includes a query graph that is compared to various portions of the ontology graph 210 by the query processor 218. This comparison can be based on the topology (e.g., the spatial relations) and content (e.g., values of the vertices/nodes and relations expressed by the edges). When a match is found, the portion of the ontology graph 210 that matches the query graph is returned as subgraph 220.
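The query-graph matching in paragraph [0074] can be illustrated by a deliberately naive sketch that compares labeled (entity, relation, entity) edges only; a real matcher would also verify topology, and all entity names below are the editor's hypothetical examples:

```python
def match_subgraph(ontology_edges, query_edges):
    """Return the portion of the ontology graph whose labeled
    (entity, relation, entity) edges match the query graph, or
    None when any query edge has no counterpart."""
    ontology = set(ontology_edges)
    matched = [edge for edge in query_edges if edge in ontology]
    return matched if len(matched) == len(query_edges) else None
```

When every edge of the query pattern is found, the matched portion plays the role of subgraph 220; otherwise no malicious pattern is flagged.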
[0075] The remainder of the ontology summary system 200 provides a summary 232 of subgraph 220 and then validates the summary and displays it in a graphical user interface (GUI) 236. First, the attack vector generator 222 converts the subgraph 220 of detected malware identified during penetration testing into a plurality of attack vectors 224. An attack vector is a specific route or method that malicious actors could employ to exploit vulnerabilities within a system, network, application, or device. It serves as a meticulously mapped-out pathway that outlines the sequence of steps an attacker might follow to compromise the intended target. The attack vectors will assist in the identification of potential weaknesses that necessitate mitigation to fortify the defenses of a system. These attack vectors encompass a wide array of techniques that can be categorized into various classes. Network-based attacks, for instance, revolve around leveraging vulnerabilities present in network protocols, services, or devices. Examples of these encompass activities such as network sniffing, distributed denial of service (DDOS) attacks, and the execution of Man-in-the-Middle (MitM) attacks that intercept communications.
[0078] Using the attack vectors 224, a policy and configuration generator 226 then generates a policy 228 for the prompt generator 230. Policy 228 directs the prompt generator 230 regarding the substance (e.g., the attack vectors 224) and style of the summary 232 to be created by the prompt generator 230. Policy 228 can include a comprehensive list of known attack vectors relevant to the system or software in consideration. This list could contain vulnerabilities, exploits, malware, and social engineering tactics. For each attack vector identified, policy 228 outlines which specific security measures and configurations are necessary to mitigate or prevent any associated attacks. These measures could encompass updated configurations for network appliances in the wireless network, security controls, wireless network configurations, and network access controls.
[0079] Additionally, the generated policy 228 could include mappings between attack vectors and corresponding security measures to ensure that appropriate steps are taken for each type of attack vector. The mapping could include configurations that are identified as being most effective against specific attack vectors, and malware that has previously penetrated the security system, allowing for the ability to take proactive steps to protect the network and the associated systems and data from malicious actions and attackers. In some examples, the prompt can identify a plurality of relationships between wireless appliances or nodes within the network. For example, the prompt can express more complex relationships between three or more nodes, thereby making broader connections that can help security analysts more quickly comprehend the information expressed by subgraph 220. Thus, security analysts can more quickly assess a threat alert stimulated by identified penetration of the network system by malware.
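The policy-directed prompt assembly in paragraphs [0078] – [0079] can be sketched as a mapping from attack vectors to prescribed security measures that is folded into prompt text; the mapping entries and the prompt wording are illustrative assumptions, not McGrew's actual policy content:

```python
# Hypothetical policy: attack vector -> required security measure.
POLICY = {
    "ddos": "enable rate limiting and upstream traffic scrubbing",
    "mitm": "enforce TLS with certificate pinning",
}

def build_prompt(attack_vectors, style="concise"):
    """Assemble an LLM prompt pairing each detected attack vector
    with the mitigation the policy prescribes for it."""
    lines = [f"Write a {style} summary of these findings:"]
    for vector in attack_vectors:
        measure = POLICY.get(vector, "flag for manual review")
        lines.append(f"- {vector}: mitigation = {measure}")
    return "\n".join(lines)
```

The policy thus controls both the substance (which vectors and measures appear) and the style of the summary the prompt generator is asked to produce.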
[0081] Additionally, the summary 232 can be displayed in the GUI 236. The GUI 236 can include both the text of the summary 232 and a visual representation of the subgraph 220. The subgraph 220 provides ground truth, and the summary 232 provides a more easily comprehended mechanism for understanding the subgraph 220. According to certain non-limiting examples, a user can select a portion of the text of the summary 232, and in response, the GUI 236 highlights a corresponding portion of the subgraph associated with the selected text. Thus, starting from the text of the summary, a security analyst can quickly find the relevant features in the subgraph 220 that correspond to portions of the text of the summary. Then referring to the corresponding region of the subgraph 220, the security analyst can verify that, for the relevant features, the relations expressed in the text are consistent with the corresponding region of the subgraph 220, thereby confirming a correct understanding of the threat.
[0111] FIG. 4A illustrates a block diagram for an example of a transformer neural network architecture, in accordance with certain embodiments. As discussed above, the prompt generator 230 in FIG. 2 can use a transformer architecture 400, such as a Generative Pre-trained Transformer (GPT) model. Additionally or alternatively, the prompt generator 230 can include a Bidirectional Encoder Representations from Transformers (BERT) model. According to certain non-limiting examples, the transformer architecture 400 is illustrated in FIG. 4A through FIG. 4C as including inputs 402, an input embedding block 404, positional encodings 406, an encoder 408 (e.g., encode blocks 410a, 410b, and 410c), a decoder 412 (e.g., decode blocks 414a, 414b, and 414c), a linear block 416, a softmax block 418, and output probabilities 420.
[Image: media_image2.png, greyscale, 1234 × 824]
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the threat analysis teachings of the prior art of record by integrating the threat analysis teachings of McGrew to realize the instant limitation. One or more of the underpinning rationales discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007) (see also MPEP § 2141(III)) are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that applying the known LLM/GPT technique of McGrew would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the LLM/GPT technology of McGrew to summarize the security graphs of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the applied references shows the ability to incorporate such features into similar systems, resulting in an improved system that achieves better analyst comprehension of security threats, this being a common goal of all cited art. The motivation to combine the references is applied to all claims below this heading.
Per claim 5, the prior art of record further suggests wherein an alert explanation is included in the output that summarizes the contextualized graph using the LLM (reads on the summary that helps analysts understand and assess threat alerts, see McGrew para 0072, and Hassanzadeh para 0051).
Per claim 9, the prior art of record further suggests wherein contextualizing the graph of the network further comprises: compressing the graph (reads on the graph being compressed to a subgraph and further processing/contextualizing the subgraph by converting it into meaningful attack vectors, see McGrew Figure 2 blocks 210 and 220).
Per claim 10, the prior art of record further suggests wherein the processor is further configured to: ground information in context input to the LLM based on a predetermined set of Common Vulnerabilities and Exposures (CVEs) (reads on the threat intelligence knowledge base, that include the exemplary CVE, CAPEC, CWE, Maglan Plexus, iDefense API and vendor-specific databases, see Hassanzadeh para 0047 – 0049).
Claim 11 is analyzed with respect to claim 1.
Claim 15 is analyzed with respect to claim 5.
Claim 16 is analyzed with respect to claim 1.
Claim 20 is analyzed with respect to claim 5.
Claims 2, 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh in view of McGrew, further in view of Allen (US Pub. No. 2019/0332784 A1).
Per claim 2, the prior art of record suggests the system of claim 1. The prior art of record is silent on explicitly stating, wherein tenant proprietary data is obfuscated in the graph prior to inputting the graph into the LLM.
Allen (US Pub. No. 2019/0332784 A1) suggests tenant proprietary data is obfuscated (reads on obfuscation options for masking sensitive content, see Allen para 0004).
[0004] Systems, methods, and software for data obfuscation frameworks for user applications are provided herein. An exemplary method includes providing user content to a classification service configured to process the user content to classify portions of the user content as comprising sensitive content, and receiving from the classification service indications of the user content that contains the sensitive content. The method includes presenting graphical indications in a user interface to the user application that annotate the user content as containing the sensitive content, and presenting obfuscation options in the user interface for masking the sensitive content within at least a selected portion among the user content. Responsive to a user selection of at least one of the obfuscation options, the method includes replacing associated user content with obfuscated content that maintains a data scheme of the associated user content.
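A scheme-preserving masking step of the kind Allen describes ("obfuscated content that maintains a data scheme") can be sketched as follows; limiting the sketch to IPv4 addresses is the editor's simplification, and the regex and masking rule are assumptions, not Allen's implementation:

```python
import re

# Dotted-quad IPv4 pattern (simplified; does not range-check octets).
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def obfuscate(text):
    """Replace IPv4 addresses with masked values that keep the
    dotted-quad data scheme (first octet retained for context)."""
    def mask(match):
        first = match.group(0).split(".")[0]
        return ".".join([first, "xxx", "xxx", "xxx"])
    return IPV4.sub(mask, text)
```

Because the masked value still looks like an IP address, downstream processing (such as graph construction or LLM summarization) can operate on the obfuscated text without schema breakage.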
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the data processing teachings of the prior art of record by integrating the data obfuscation before data processing teaching of Allen to realize the instant limitation. One or more of the underpinning rationales discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007) (see also MPEP § 2141(III)) are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that both Allen and the prior art of record operate in the data security domain, and that applying the known technique of Allen would have yielded predictable results and resulted in an improved system by addressing the well-known risk of sending sensitive data in the clear. It would have been recognized that applying the ability to obfuscate content while maintaining its data scheme to the exemplary asset names, IPs, etc. of the graph structure of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the applied references shows the ability to incorporate such obfuscation features into similar systems, resulting in an improved system that applies privacy protection to cybersecurity graph processing using known techniques. The motivation to combine the references is applied to all claims below this heading.
Claim 12 is analyzed with respect to claim 2.
Claim 17 is analyzed with respect to claim 2.
Claims 3, 4, 13, 14, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh in view of McGrew, further in view of Karabey (US Pub. No. 2023/0224324 A1).
Per claims 3 – 4, the prior art of record suggests the system of claim 1 and explanations included in the output that summarizes the contextualized graph using the LLM (reads on the combination of identifying critical assets and critical paths and providing context about which elements are more important, see Hassanzadeh para 0005, 0050, 0051 and 0064, and the summary including attack vector information, which is reasonably scoped as an attack path explanation, see McGrew Figure 2 block 230 and para 0078 – 0080). The prior art of record is silent on explicitly stating an attack path and critical path explanation.
Karabey (US Pub. No. 2023/0224324 A1) suggests attack path and critical path explanation (reads on the output including tactics, which are high-level natural language descriptions of behaviors that a threat actor is trying to accomplish, and techniques, which are detailed descriptions that represent how the threat actor achieves the tactic, see Karabey para 0022 – 0023 and Figure 1 blocks 114, 116 and 118).
[0004] A method can include receiving, at a compute device, a natural language description of activity on a computer network. The method can include executing, based on the natural language description, a natural language processing (NLP) model to provide a tactic and technique of a cyber attack associated with the natural language description. The method can further include determining, based on the provided tactic and technique, a response to mitigate the cyber attack and implementing the determined response on the computer network.
[0014] Automating the mapping in an AI/NLP driven manner reduces human effort, time to identification of the tactic and technique, and also reduces opportunity for human error. Mapping to tactic and technique in an automated manner also improves post-compromise detection of adversaries by highlighting the steps an attacker may have taken or could take next in a timely fashion. These automated, timely, and accurate mappings of incident description to tactic and technique will also help reduce enterprise risk by helping identify how an attacker got in and how are they laterally traversing the network. The automated mappings will also provide a common ground for sharing timely and accurate threat intel across organizations and reducing time to react to a cyber attack event. The reduced time to react to a cyber attack event increases the chances that the cyber attack will be mitigated and reduces the amount of damage done to a network for a given attack.
[0022] The tactic 222, 224, 226 are high-level descriptions of behaviors that a threat actor (one attempting to carry out a cyber attack) is trying to accomplish. The tactic 222, 224, 226 represents the “why” of a technique 228, 230, 232 in a same column as the tactic 222, 224, 226. For example, initial access is a tactic a threat actor will try to perform to gain access to the network 122.
[0023] Techniques 228, 230, 232 are detailed descriptions that represent how the threat actor achieves the tactic 222, 224, 226. Drive-by compromise, exploit public-facing application, external remote services, hardware additions, phishing, replication through removable media, supply chain compromise, trusted relationship, and valid accounts are all techniques 228, 230, 232 for the tactic of initial access.
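The tactic/technique relationship in paragraphs [0022] – [0023] can be sketched as a lookup that renders each technique with the tactic (the "why") it serves; the mapping below is a small hypothetical ATT&CK-style subset chosen by the editor, not the reference's table:

```python
# Hypothetical technique -> tactic mapping (ATT&CK-style subset).
TACTIC_OF = {
    "phishing": "initial access",
    "drive-by compromise": "initial access",
    "valid accounts": "initial access",
}

def explain_techniques(techniques):
    """One natural-language line per technique, naming the tactic
    it achieves (the 'why' behind the 'how')."""
    return [f"{t}: achieves tactic '{TACTIC_OF.get(t, 'unknown')}'"
            for t in techniques]
```

Rendering each mapped technique alongside its tactic is one way such descriptions could serve as the attack/critical path explanations the rejection relies on.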
[0073] Although a few embodiments have been described in detail above, other modifications are possible. For example, the logic flows depicted in the figures do not require the order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Other embodiments may, be within the scope of the following claims.
[Image: media_image3.png, 674 × 981 pixels, greyscale]
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the attack path teachings of the prior art of record by integrating the attack/critical path explanation teachings of Karabey to realize the instant limitation. One or more of the underpinning rationales, as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that both Karabey and the prior art of record operate in the data security domain, helping analysts understand complex attack path information in cybersecurity contexts. It would have been recognized that applying the attack/critical path explanations in the form of natural-language-processed tactic and technique descriptions would have yielded predictable results because the prior art of record already generates attack paths and already summarizes attack vectors, providing the “how,” and applying the teachings of Karabey provides the “what” content to include (attack/critical paths and tactics/techniques), resulting in an improved system that uses all techniques known in the art to address analyst comprehension concerns. The motivation to combine the references is applied to all claims below this heading.
Claim 13 is analyzed with respect to claim 3.
Claim 14 is analyzed with respect to claim 4.
Claim 18 is analyzed with respect to claim 3.
Claim 19 is analyzed with respect to claim 4.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh in view of McGrew, and further in view of Hayes (US Pub. No. 2025/0045531 A1).
Per claim 6, the prior art of record suggests the system of claim 1. The prior art of record is silent on explicitly stating wherein guardrails are used to reduce hallucinations in the output generated using the LLM.
Hayes (US Pub. No. 2025/0045531 A1) is relied upon to suggest guardrails are used (reads on a hallucination and harmful-content checking stage configured to analyze and transform the response from the second LLM to remove or correct hallucinations and harmful content, see Hayes Figure 5 block 540 and para 0054 and 0056 – 0058) to reduce hallucinations in the output generated (reads on to remove or correct hallucinations and harmful content, see Hayes Figure 5 block 540 and para 0054 and 0056 – 0058) using the LLM (reads on in the output of the second LLM, see Hayes Figure 5 and para 0054 and 0056 – 0058).
[0053] FIG. 5 is a flowchart showing an exemplary sequence of steps that may be performed using a Generative AI framework 400 comprising multiple interconnected LLMs 300a, 300b, and 300c in accordance with certain disclosed embodiments of the invention. The sequence starts at step 500 and proceeds to step 510 where the Generative AI framework 400 receives a user prompt. In some embodiments, the received user prompt may have been communicated to the framework by a user 120 over the network 110, for example, using a cloud service or application specific programming functional call sent to a server 200. At step 520, a first LLM 300a processes the received user prompt to generate an updated user prompt, for example, as part of a first stage of the framework 400. The first LLM 300a may be used to detect and transform a jailbreaking/malicious user prompt, detect and transform an out-of-scope question(s) in the received user prompt, and thereby transform the received user prompt into an updated prompt that is better suited for generating a response using a second LLM 300b.
[0054] Next, at step 530, the updated user prompt is input to the second LLM 300b which, in turn, processes the updated user prompt to generate a response to the user prompt. The generated output response from the second LLM 300b is input to a third LLM 300c at step 540. The third LLM 300c processes the response that it received from the second LLM 300b to generate an updated response. The third LLM 300c may provide a hallucination and harmful-content checking stage, for example, configured to analyze and transform the generated response from the second LLM 300b to remove or correct AI hallucinations and harmful content. In this exemplary sequence of steps, at step 550, the updated response generated by the third LLM 300c is output from the Generative AI framework 400 to return the requesting user 120. The sequence ends at step 560.
[0055] Those skilled in the art will understand that the multi-staged Generative AI framework 400 may apply to any type of Generative AI system or method. Accordingly, although the Generative AI framework 400 is described in the disclosed embodiments in the context of generative text-based systems, such as chatbots and other online AI systems that provide textual answers to user prompts, in other alternative embodiments the multi-staged Generative AI framework 400 may be employed in other types of Generative AI systems and methods, such as for generating images, art, music, code, data, molecules, and/or other information based on input prompts provided by users.
[0056] The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions that may be executed on a computer, hardware, firmware, or a combination thereof. It also will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Further, the invention is not limited to any particular hardware platform or set of software capabilities.
[0057] While the disclosed embodiments have been described with reference to certain exemplary schematic block diagrams and flowcharts, those skilled in the art will appreciate that other variations and configurations are possible within the scope of the invention. For example, one or more of the exemplary functional modules disclosed herein may be combined or otherwise implemented within a single functional module. Similarly, one or more of the disclosed steps in the exemplary flow diagram of FIG. 5 may be combined or otherwise integrated with other disclosed steps. In some embodiments, the disclosed steps of the flow diagram may be performed in different orders than shown in the exemplary process of FIG. 5. Accordingly, the components of block diagrams and flow diagrams (e.g., modules, blocks, structures, devices, steps, features, etc.) may be variously combined, separated, removed, reordered, and replaced in a manner other than as expressly described and depicted herein.
[0058] While the disclosed embodiments illustrate various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while certain processes have been shown or described separately, those skilled in the art will appreciate that the disclosed processes may be routines or modules within other processes.
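The three-stage pipeline Hayes describes in FIG. 5 and paragraphs [0053]-[0054] (sanitize the prompt, generate a response, then guardrail-check the response) can be sketched as follows. The three LLM stages are stubbed with plain functions; all function names and the placeholder filtering logic are hypothetical illustrations, not Hayes's implementation.

```python
# Hedged sketch of the multi-stage Generative AI framework of Hayes FIG. 5.
# Each stage stands in for a separate LLM (300a, 300b, 300c); the string
# manipulation here is a trivial placeholder for real model inference.

def sanitize_prompt(user_prompt: str) -> str:
    """Stage 1 (LLM 300a, step 520): detect and transform jailbreaking or
    out-of-scope content, producing an updated prompt."""
    return user_prompt.replace("ignore previous instructions", "")

def generate_response(prompt: str) -> str:
    """Stage 2 (LLM 300b, step 530): generate a response to the updated prompt."""
    return f"response to: {prompt}"

def guardrail_check(response: str) -> str:
    """Stage 3 (LLM 300c, step 540): analyze and transform the response to
    remove or correct hallucinations and harmful content."""
    return response  # a real stage would rewrite or filter the text

def run_framework(user_prompt: str) -> str:
    # Steps 510-550: receive prompt, sanitize, generate, guardrail-check, return.
    return guardrail_check(generate_response(sanitize_prompt(user_prompt)))
```

The design point relevant to claim 6 is that the guardrail stage sits after generation, so every output passes through the hallucination/harmful-content check before reaching the user.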
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the LLM teachings of the prior art of record by integrating the LLM hallucination prevention teachings of Hayes to realize the instant limitation. One or more of the underpinning rationales, as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that modern LLM systems are prone to hallucinations and that having a system to minimize hallucinations would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the ability to remove and correct hallucinations to the LLM teachings of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate hallucination removal and corrective features into similar systems, resulting in an improved system that uses all techniques known in the art to improve LLM output reliability. The motivation to combine the references is applied to all claims below this heading.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh in view of McGrew, and further in view of Larson (US Pub. No. 20250315524 A1).
Per claim 7, the prior art of record further suggests wherein the one or more prompts include one or more predetermined prompts that are input to the LLM (the Examiner construes this to be an obvious limitation of the prior art’s disclosure of including a comprehensive list of known attack vectors as inputs to the prompt generator, which would reasonably produce known/predetermined prompts, see McGrew para 0073 – 0075, 0078 and 0081); however, the prior art of record does not explicitly state one or more prompts include one or more predetermined prompts that are input to the LLM.
Larson (US Pub. No. 20250315524 A1) is relied upon to teach one or more prompts include one or more predetermined prompts that are input to the LLM (see Larson para 0048).
[0048] FIG. 3 illustrates an architecture 300 showing details of the LLM orchestrator 202 communicating with the suite of tools 106 and the LLM 204. To understand the function of a binary 104, particularly malware, the LLM 204 may be prompted to investigate secrets. This begins with the LLM orchestrator 202. The LLM orchestrator 202 may send a predetermined prompt to the LLM 204 along communication path 302. As mentioned above, the LLM orchestrator 202 may have a number of predetermined prompts that it provides to the LLM 202 when certain conditions occur. This prompting to discover secrets can cause the LLM 204 to take further action when it cannot understand the meaning of code (e.g., garbled, packed, or obfuscated code) by using the suite of tools 106, in a variety different ways if necessary, in order to gain understanding. For example, the LLM 204 may be able to recognize a command and control string as such and then continue its investigation to determine the function of the command and control string.
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the LLM teachings of the prior art of record by integrating the LLM predetermined prompt teachings of Larson to realize the instant limitation. One or more of the underpinning rationales, as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that having predetermined prompts to address certain conditions would have yielded predictable results and resulted in an improved system that minimizes variability when recurring events occur. It would have been recognized that applying the ability to have predetermined prompts to the LLM teachings of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate prompts into similar systems, resulting in an improved system that uses all techniques known in the art to improve LLM output reliability. The motivation to combine the references is applied to all claims below this heading.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hassanzadeh in view of McGrew, and further in view of Binyamini (US Pub. No. 2025/0315519 A1).
Per claim 8, the prior art of record suggests the system of claim 1. The prior art of record is silent on explicitly stating wherein the graph is stored in a JavaScript Object Notation (JSON) format for input and/or output.
Binyamini (US Pub. No. 2025/0315519 A1) is relied upon to teach the graph is stored in a JavaScript Object Notation (JSON) format for input and/or output (reads on the attack flow graph may be generated in JSON format, see Binyamini para 0050).
[0050] Upon generating the subgraph, the attack flow graph generation module 230 generates an attack flow graph based on the generated subgraph, the cyber-attack report 201 and an attack flow schema 255. The attack flow schema 255 is a structured framework or blueprint that outlines how data or information is to be organized and represented, and defines the structure and rules for valid data, including element types, attributes, and relationships. In the present implementation, the attack flow schema 255 defines the entry point on how the attack is initiated, conditions, operators, attack actions and outcomes. Based on the attack flow schema 255, properties for each of node of the graph is updated using the cyber-attack report 201 to generate the attack flow graph. For example, based on the schema, relevant properties are added to each node. The properties include, but are not limited to, unique identifier (ID) for each stage in the attack flow, purpose for referencing each stage, description explaining the process at each stage, relationship between the IDs, and tools used in the attack. Such information is added while generating the attack flow graph by referring to the attack flow schema 255 and using the generated subgraph, the cyber-attack report 201. The generated attack flow graph is stored in an attack flow graph database 260. The attack flow graph database 260 (attack flow knowledgebase) storing a plurality of attack flow graphs may be used by the experts or systems for various purpose including, but are not limited to, threat modeling incident response, vulnerability assessment, risk management, policy development, etc. In one embodiment, the attack flow graph is a structured graph and generated in JSON format. However, the attack flow graph may be generated in any other know formats.
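An attack flow graph carrying the node properties Binyamini lists in paragraph [0050] (unique ID for each stage, purpose, description, relationships between IDs, and tools) can be serialized to JSON as sketched below. The sample stage names and values are illustrative only, not from the reference.

```python
# Minimal sketch of an attack flow graph stored in JSON format, with node
# properties modeled on those enumerated in Binyamini para 0050.
import json

attack_flow_graph = {
    "nodes": [
        {"id": "stage-1", "purpose": "initial access",
         "description": "attacker gains entry via phishing", "tools": ["mailer"]},
        {"id": "stage-2", "purpose": "lateral movement",
         "description": "attacker traverses the internal network", "tools": ["psexec"]},
    ],
    # Relationships between stage IDs.
    "edges": [{"from": "stage-1", "to": "stage-2"}],
}

serialized = json.dumps(attack_flow_graph)   # store or transport as JSON text
restored = json.loads(serialized)            # read it back for input
```

Because JSON round-trips the structure losslessly, the same serialized form serves for both storage in an attack flow graph database and exchange with downstream tools.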
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the attack graph teachings of the prior art of record by integrating the attack graph generating format teachings of Binyamini to realize the instant limitation. One or more of the underpinning rationales, as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that applying the known attack graph generating format of Binyamini would have yielded predictable results and resulted in an improved system. It would have been recognized that formatting the attack graphs of the prior art of record in the well-known JSON format as taught by Binyamini would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such formatting features into similar systems, resulting in an improved system that uses all formatting techniques known in the art to facilitate transportation of the graphs. The motivation to combine the references is applied to all claims below this heading.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brian Shaw whose telephone number is (571) 270-5191. The examiner can normally be reached on Mon-Thurs from 6:00 AM-3:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeff Nickerson can be reached on (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN F SHAW/
Primary Examiner, Art Unit 2432