DETAILED ACTION
This communication is responsive to Applicant’s arguments for Application No. 18/775,550 filed on 02/13/2026. Claims 1, 5, 9, 17, and 19 have been amended. Claim 18 has been cancelled. Claims 1-17 and 19-20 are pending and examined herein.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
35 U.S.C. § 101 Rejections
Applicant’s arguments with respect to claims 1-20 being rejected under 35 U.S.C. § 101 have been fully considered and are persuasive. Accordingly, the rejection under 35 U.S.C. § 101 has been withdrawn.
35 U.S.C. § 103 Rejections
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on the reference applied in the prior rejection of record for the teaching or matter specifically challenged in the argument.
Applicant argues that “the claims include a ‘smart routing’ construct for assigning vulnerabilities to human remediation agents using enterprise directory profiles and measured remediation history, together with an acceptance-and-assignment workflow. The amended claims introduce a specific mechanism for selecting a human remediation agent using a trained model that computes an agent–vulnerability match score based on enterprise directory attributes and remediation history” and that “the agent data include specialization categories aligned to a software-category hierarchy, years of experience, certifications, performance scores, current task load and availability, device identifiers, and remediation history weighted by the model”.
This amendment has necessitated a new ground of rejection over Sand et al. (US 20240236137 A1), hereinafter referred to as Sand, in view of McCarthy et al. (US 20230421582 A1), hereinafter referred to as McCarthy. McCarthy discloses a smart-routing construct in which threat management includes detecting new cybersecurity threats and assigning those threats to one or more analysts for action, and an analyst can be selected for the assignment based on an analyst threat response profile. This profile is produced by analyzing triage results from a security operations center caseload history and can include qualifications, certifications, trainings, experience, and success rate, along with metrics such as initial response time, closure response time, and a peer interaction metric. McCarthy also considers present workload and availability, stating that the selected analyst may be unavailable because the caseload is heavy or full and that cases may be reassigned to free the analyst. This is a profile-driven, workload-aware routing mechanism for assigning a human remediation agent.
McCarthy’s analyst threat response profile is the same type of structured agent-selection data. McCarthy also discloses remediation history and a trained model that produces the equivalent of an agent–vulnerability match score: neural-network training is taught in which the training dataset includes cybersecurity operations center caseload histories, resolutions to cybersecurity threats, and threat-response resolution metrics, and in which one or more weights associated with each node are adjusted until the neural network can form an inference that produces the expected result. McCarthy further teaches converting the cybersecurity workflow data into machine-learning training data, training a neural network, and then executing the analyzing/triaging/generating operations on the trained network. The trained-network inference that selects the best analyst based on prior caseload history and resolution performance corresponds to the claimed agent–vulnerability match score.
Performance scores are taught by the success rate, initial response time, closure response time, and peer interaction metric. Current task load and availability are taught by the heavy/full caseload and reassignment logic. The claimed software-specialization attributes aligned to a software hierarchy are analogous to McCarthy’s use of operating system and software configuration information, including users on devices having similar operating systems or software configurations and enterprise standards such as OS, applications, antivirus applications, and VPN apps. Device identification data is disclosed by the source IP address and port, destination IP address and port, username, and network client IP identification data.
McCarthy also teaches the analyst-facing notification/assignment workflow. McCarthy discloses that the system receives and handles analyst-facing inputs and outputs including an SMS message, an email message, a graphical display, a proposed action, and a recommended technique, and further discloses assignment and reassignment of threats to analysts. This describes transmitting analyst-facing alert content with a recommended remediation action and then assigning the case to the selected analyst through the workflow system. McCarthy accesses network-connected cybersecurity threat protection applications, including firewalls, antivirus, and intrusion-detection tools, that receive threat inputs including source/destination IP and port and network configuration data; analyzes metadata including known vulnerabilities and known security settings; and uses table-lookup analysis or machine-learning-algorithm-based analysis to verify and compare notifications. McCarthy also discloses firewall techniques used to block traffic attempting to penetrate the network through ports and communication protocols; this input and metadata is mapped to generate a cybersecurity threat response, including responses to zero-day threats. Hence, McCarthy supports the vulnerability-identification and firewall-response framework as amended.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9-13, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sand et al. (US 20240236137 A1), hereinafter referred to as Sand, in view of McCarthy et al. (US 20230421582 A1), hereinafter referred to as McCarthy.
As per claim 1, Sand discloses a system for vulnerability smart routing comprising a plurality of devices, each device having an address and at least one port, and a first computing device that includes at least one processor and a memory device that stores executable code that, when executed, causes the at least one processor to:
(c) perform a classification analysis to determine the vulnerability’s classification by using (The RF classification method is a collection of decision trees that can predict or make a recommendation based on vulnerability input data. That is, each individual decision tree includes branches that classify vulnerability data according to their characteristics (e.g., type of vulnerability, VCSS score, rating, year of occurrence, product version, ongoing threat intelligence inputs, etc.), Sand, para [0093]).
(d) perform a categorical analysis to generate category data by using the (The vulnerability score V.sub.T and first sub-score V.sub.T1 may also be dependent upon MC, a Modified Integrity (MI) metric and a Modified Availability (MA) which are known metrics that are defined by CVSS. MC, MI and MA are, essentially, modifiers to the CVSS Base metrics and are designed to account for the aspects of target enterprise 710 that can increase or decrease the severity of exploitable vulnerability 716, Sand, para [0106]).
However, Sand does not explicitly disclose the limitations:
(a) scan a range of ports for each of the plurality of devices using the respective address and port by passing a script to the port of each device and capturing a response, wherein the response comprises software configuration data associated with each device
(b) detect a vulnerability by comparing the software configuration data for each device against a database of known vulnerabilities
(e) map the vulnerability to firewall attack signatures to update a firewall for a particular network, wherein the attack signatures define regular expressions, formatting, identifiers, structures, rules, policies, or other means with which the firewall can detect the vulnerability;
(f) generate vulnerability display data that comprises alphanumeric text that displays a vulnerability remediation task, images, graphics, layout data, and computer-readable instructions that, when executed, cause a remediation agent computing device to render a graphical user interface;
(g) perform a remediation analysis to determine and select a human remediation agent to notify using the vulnerability category data and remediation agent data, wherein the remediation agent data comprises, for the human remediation agent, at least a software specialization category mapped to a hierarchy of operating systems, system software, application software, and programming software, years of experience, certifications, performance indicators, current task load and availability, remediation agent computing device identification data, and remediation history; and wherein the remediation analysis applies a trained model that weights the remediation history to produce an agent–vulnerability match score; and
(h) transmit to the selected human remediation agent’s computing device an interactive remediation assignment alert comprising vulnerability display data and an accept-assignment control implemented as a selectable function or link included in the vulnerability display data, the accept-assignment control being configured, when selected, to (i) accept the vulnerability remediation task and (ii) redirect the selected human remediation agent’s computing device to a webpage, desktop application, or mobile application presenting the graphical user interface with additional information regarding the vulnerability; and, responsive to detecting selection of the accept-assignment control, assign the vulnerability remediation task to the selected human remediation agent.
McCarthy discloses:
(a) scan a range of ports for each of the plurality of devices using the respective address and port by passing a script to the port of each device and capturing a response, wherein the response comprises software configuration data associated with each device (The threat protection inputs can include source/destination IP and port data and network configuration data, and include white-hat penetration testing and threat hunting, where threat hunting can involve iteratively searching network-connected devices, McCarthy, para [0086]. Here, the plurality of network-connected cybersecurity threat protection applications are the scanning tools, the port fields correspond to the addressed ports, the returned input/notification data is the captured response, and the network configuration data corresponds to the software configuration data)
(b) detect a vulnerability by comparing the software configuration data for each device against a database of known vulnerabilities (A security metric can include known vulnerabilities of the device or known vulnerabilities based on the user's access privileges, McCarthy, para [0031], [0061], [0062]. Here, the inputs can include the utilization of discovered vulnerabilities, the supervisory workflow can perform table-lookup analysis, and the SOAR includes a threat and vulnerability management component. The incoming device metadata, including network configuration data, is compared via table-lookup/threat-and-vulnerability processing against stored known-vulnerability information to identify the vulnerability, which is analogous to a CVE comparison)
(e) map the vulnerability to firewall attack signatures to update a firewall for a particular network, wherein the attack signatures define regular expressions, formatting, identifiers, structures, rules, policies, or other means with which the firewall can detect the vulnerability; (The application capabilities can include firewall 338 techniques. Firewall techniques can be used to block network traffic, applications, etc. that can attempt to penetrate a network and IT infrastructure using one or more network ports and communications protocols, McCarthy, para [0052]. Here, firewall techniques can block traffic that attempts to penetrate the network using network ports and communication protocols. The system can then map different threat inputs/ formats to the same threat and the SOAR can configure/control infrastructure and can update software/firmware and install security software).
(f) generate vulnerability display data that comprises alphanumeric text that displays a vulnerability remediation task, images, graphics, layout data, and computer-readable instructions that, when executed, cause a remediation agent computing device to render a graphical user interface; (Information associated with cybersecurity management can be rendered on a display 714 connected to the one or more processors 710. The display can comprise a television monitor, a projector, a computer monitor (including a laptop screen, a tablet screen, a netbook screen, and the like), a smartphone display, a mobile device, or another electronic display and a proposed action, McCarthy, para [0081]).
(g) perform a remediation analysis to determine and select a human remediation agent to notify using the vulnerability category data and remediation agent data, wherein the remediation agent data comprises, for the human remediation agent, at least a software specialization category mapped to a hierarchy of operating systems, system software, application software, and programming software, years of experience, certifications, performance indicators, current task load and availability, remediation agent computing device identification data, and remediation history; and wherein the remediation analysis applies a trained model that weights the remediation history to produce an agent–vulnerability match score; and (Threats are assigned to analysts and the analyst is selected based on an analyst threat response profile. The profile can include analyst qualifications, certifications, training, experience, success rate, and so on, and is augmented with metrics such as initial response time and closure response time, with SOC caseload histories, threat-response resolutions, and resolution metrics serving as neural-network training data, McCarthy, para [0049]. Here, the human remediation agent is the analyst, the remediation agent data is the analyst threat profile and performance metrics, the remediation history is the SOC caseload history and resolutions to prior threats, and the selection based on the learned profile corresponds to the agent–vulnerability match score)
(h) transmit to the selected human remediation agent’s computing device an interactive remediation assignment alert comprising vulnerability display data and an accept-assignment control implemented as a selectable function or link included in the vulnerability display data, the accept-assignment control being configured, when selected, to (i) accept the vulnerability remediation task and (ii) redirect the selected human remediation agent’s computing device to a webpage, desktop application, or mobile application presenting the graphical user interface with additional information regarding the vulnerability; and, responsive to detecting selection of the accept-assignment control, assign the vulnerability remediation task to the selected human remediation agent. (The generating a cybersecurity threat response can include generating a notification, where the notification can be used to trigger a variety of responses, and these notifications/inputs can be sent as SMS, email, or a graphical display with proposed actions/recommended techniques. Assigning threats to analysts and load balancing/reassigning analyst workloads is performed, McCarthy, para [0057]. Here, the notification to the analyst device is the assignment alert, the displayed proposed/recommended action is the displayed remediation content, and the assignment corresponds to assigning the remediation task to the selected human agent).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy, providing vulnerability scoring based on organization-specific metrics (Sand) together with cybersecurity operations case triage groupings (McCarthy), in order to effectively use a routing construct to assign vulnerabilities to agents based on agent data (see McCarthy, para [0057]).
As per claim 2, Sand and McCarthy disclose the system of claim 1, wherein:
Furthermore, Sand discloses:
(a) the first computing device comprises at least one neural network; and (First local device node retrieves CVSS score based on ML recommendation system 208, Sand, para [0023]).
(b) the at least one neural network is used to determine the vulnerability's classification (The classification method may correspond to artificial neural networks (ANN), Sand, para [0053]).
As per claim 3, Sand and McCarthy disclose the system of claim 2, wherein
Furthermore, Sand discloses:
the at least one neural network is configured with a support vector machine network architecture (The classification method may correspond to support vector machines (SVM), Sand, para [0053]).
As per claim 4, Sand and McCarthy disclose the system of claim 2, wherein
Furthermore, Sand discloses:
the at least one neural network comprises a convolutional neural network architecture (The classification method may correspond to artificial neural networks (ANN), Sand, para [0053]. A CNN is a specialized type of ANN designed for processing data with a grid-like structure, such as images).
As per claim 9, Sand discloses a system for vulnerability smart routing comprising a plurality of devices, each device having an address and at least one port, a first computing device that comprises a first processor and a first memory device storing data and executable code that, when executed, causes the first processor to:
(c) classify the vulnerability by comparing (The RF classification method is a collection of decision trees that can predict or make a recommendation based on vulnerability input data. That is, each individual decision tree includes branches that classify vulnerability data according to their characteristics (e.g., type of vulnerability, VCSS score, rating, year of occurrence, product version, ongoing threat intelligence inputs, etc.), Sand, para [0093]).
(d) categorize the vulnerability by comparing stored vulnerability data and to output vulnerability category data; (The vulnerability score V.sub.T and first sub-score V.sub.T1 may also be dependent upon MC, a Modified Integrity (MI) metric and a Modified Availability (MA) which are known metrics that are defined by CVSS. MC, MI and MA are, essentially, modifiers to the CVSS Base metrics and are designed to account for the aspects of target enterprise 710 that can increase or decrease the severity of exploitable vulnerability 716, Sand, para [0106]).
However, Sand does not explicitly disclose the limitations:
(a) scan a range of ports for each of the plurality of devices using the respective address and port by passing a script to the port of each device and capturing a response, wherein the response comprises software configuration data associated with each device
(b) detect a vulnerability by comparing the software configuration data for each device against a database of known vulnerabilities
(e) provide a machine-learning software module and training data, wherein the machine-learning software module causes the first processor to:
i. iteratively train, using the training data, a neural network to generate simulated vulnerability data, ii. insert the training data into an iterative training and testing loop to predict a target vulnerability, iii. repeatedly determine, during each iteration of the training and testing loop, the target vulnerability, wherein each iteration of the training and testing loop has differing weights assigned to one or more nodes of the neural network, each of the differing weights being updated with each iteration of the training and testing loop to reduce error in predicting the target vulnerability and improve predictability of the neural network, thereby creating a trained neural network, and iv. deploy the trained neural network;
(f) match the vulnerability to a human remediation agent using the vulnerability category data and remediation agent data wherein the remediation agent data comprises, for each human remediation agent, at least a software specialization category mapped to a hierarchy of operating systems, system software, application software, and programming software, years of experience, certifications, performance indicators, current task load and availability, remediation agent computing device identification data, and remediation history; and wherein a remediation analysis applies a trained model that weights the remediation history to produce an agent–vulnerability match score;
McCarthy discloses:
(a) scan a range of ports for each of the plurality of devices using the respective address and port by passing a script to the port of each device and capturing a response, wherein the response comprises software configuration data associated with each device (The threat protection inputs can include source/destination IP and port data and network configuration data, and include white-hat penetration testing and threat hunting, where threat hunting can involve iteratively searching network-connected devices, McCarthy, para [0086]. Here, the plurality of network-connected cybersecurity threat protection applications are the scanning tools, the port fields correspond to the addressed ports, the returned input/notification data is the captured response, and the network configuration data corresponds to the software configuration data)
(b) detect a vulnerability by comparing the software configuration data for each device against a database of known vulnerabilities (A security metric can include known vulnerabilities of the device or known vulnerabilities based on the user's access privileges, McCarthy, para [0031], [0061], [0062]. Here, the inputs can include the utilization of discovered vulnerabilities, the supervisory workflow can perform table-lookup analysis, and the SOAR includes a threat and vulnerability management component. The incoming device metadata, including network configuration data, is compared via table-lookup/threat-and-vulnerability processing against stored known-vulnerability information to identify the vulnerability, which is analogous to a CVE comparison)
(e) provide a machine-learning software module and training data, wherein the machine-learning software module causes the first processor to:
i. iteratively train, using the training data, a neural network to generate simulated vulnerability data, (The accessing, the receiving, the analyzing, the triaging, and the generating are converted to machine learning training data, McCarthy, para [0088])
ii. insert the training data into an iterative training and testing loop to predict a target vulnerability, (The simulated or test inputs can be used to determine the efficacy of detecting a threat and generating one or more inputs based on the threat. The simulated or test inputs can be used to test various threat scenarios. The testing can be based on simulation, emulation, hypothesis testing, and the like, McCarthy, para [0030]).
iii. repeatedly determine, during each iteration of the training and testing loop, the target vulnerability, wherein each iteration of the training and testing loop has differing weights assigned to one or more nodes of the neural network, each of the differing weights being updated with each iteration of the training and testing loop to reduce error in predicting the target vulnerability and improve predictability of the neural network, thereby creating a trained neural network, and (The training of the neural network can include providing training data to the neural network, observing inferences formed by the neural network, adjusting weights associated with nodes within the neural network. The flow 100 further includes executing 174 the analyzing, the triaging, and the generating on the neural network that was trained, McCarthy, para [0040])
iv. deploy the trained neural network; (The SOAR system can comprise a cybersecurity threat management entity, where the cybersecurity threat management entity can be based on software, hardware such as specialized hardware, a suite of software tools or applications, McCarthy, para [0040]).
(f) match the vulnerability to a human remediation agent using the vulnerability category data and remediation agent data wherein the remediation agent data comprises, for each human remediation agent, at least a software specialization category mapped to a hierarchy of operating systems, system software, application software, and programming software, years of experience, certifications, performance indicators, current task load and availability, remediation agent computing device identification data, and remediation history; and wherein a remediation analysis applies a trained model that weights the remediation history to produce an agent–vulnerability match score; (Threats are assigned to analysts and the analyst is selected based on an analyst threat response profile. The profile can include analyst qualifications, certifications, training, experience, success rate, and so on, and is augmented with metrics such as initial response time and closure response time, with SOC caseload histories, threat-response resolutions, and resolution metrics serving as neural-network training data, McCarthy, para [0049]. Here, the human remediation agent is the analyst, the remediation agent data is the analyst threat profile and performance metrics, the remediation history is the SOC caseload history and resolutions to prior threats, and the selection based on the learned profile corresponds to the agent–vulnerability match score).
(g) select a human remediation agent based on the agent-vulnerability match score and generate vulnerability display data associated with the vulnerability; (Threats are assigned to analysts and the analyst is selected based on an analyst threat response profile, McCarthy, para [0049]).
(h) transmit to the selected human remediation agent’s computing device an interactive remediation assignment alert comprising the vulnerability display data and an accept-assignment control implemented as a selectable function or link included in the vulnerability display data, the accept-assignment control being configured, when selected, to: i. accept a vulnerability remediation task, and ii. redirect the selected human remediation agent’s computing device to a webpage, a desktop application, a mobile application presenting a graphical user interface presenting additional information regarding the vulnerability, iii. and, responsive to detecting selection of the accept assignment control, assign the vulnerability remediation task to the selected human remediation agent (The generating a cybersecurity threat response can include generating a notification, where the notification can be used to trigger a variety of responses, and these notifications/inputs can be sent as SMS, email, or a graphical display with proposed actions/recommended techniques. Assigning threats to analysts and load balancing/reassigning analyst workloads is performed, McCarthy, para [0057]. Here, the notification to the analyst device is the assignment alert, the displayed proposed/recommended action is the displayed remediation content, and the assignment corresponds to assigning the remediation task to the selected human agent).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy, providing vulnerability scoring based on organization-specific metrics (Sand) together with cybersecurity operations case triage groupings (McCarthy), in order to effectively use a routing construct to assign vulnerabilities to agents based on agent data (see McCarthy, para [0057]).
As per claim 10, Sand and McCarthy disclose the system of claim 9, wherein:
Furthermore, Sand discloses:
(a) the first computing device comprises at least one neural network; and (First local device node retrieves CVSS score based on ML recommendation system 208, Sand, para [0023]).
(b) the at least one neural network is used to determine the vulnerability's classification (The classification method may correspond to artificial neural networks (ANN), Sand, para [0053]).
As per claim 11, Sand and McCarthy disclose the system of claim 10, wherein
Furthermore, Sand discloses:
the at least one neural network is configured with a support vector machine network architecture (The classification method may correspond to support vector machines (SVM), Sand, para [0053]).
As per claim 12, Sand and McCarthy disclose the system of claim 9, wherein:
Furthermore, Sand discloses:
(a) the first computing device comprises at least one neural network; and (First local device node retrieves CVSS score based on ML recommendation system 208, Sand, para [0023]).
(b) the at least one neural network is used to determine the vulnerability's category (ML recommendation system 208 can recognize a pattern, Sand, para [0053]).
As per claim 13, Sand and McCarthy disclose the system of claim 12, wherein the at least one neural network is configured with a support vector machine network architecture (The classification method may correspond to support vector machines (SVM), Sand, para [0053]).
As per claim 17, Sand and McCarthy disclose the system of claim 9, wherein
Furthermore, Sand discloses:
when the selected human remediation agent is notified of the vulnerability, the selected human remediation agent either removes or patches the vulnerability (Vulnerability scoring system 106 can, based on the vulnerability score, automatically apply patches or fixes to webserver application 112, Sand, para [0087]).
As per claim 19, Sand discloses a system for vulnerability smart routing comprising a first computing device that includes at least one processor and a memory device that stores executable code that, when executed, causes the at least one processor to:
(c) compare the system configuration data against known vulnerability data to generate a vulnerability set, wherein the vulnerability set comprises a plurality of vulnerabilities that are each associated with a network computing device or software application; (Continuous monitoring system 206 (FIG. 2) feeds historical vulnerability information, Sand, para [0090]).
(d) perform a classification analysis to determine a classification of a vulnerability (The RF classification method is a collection of decision trees that can predict or make a recommendation based on vulnerability input data. That is, each individual decision tree includes branches that classify vulnerability data according to their characteristics (e.g., type of vulnerability, VCSS score, rating, year of occurrence, product version, ongoing threat intelligence inputs, etc.), Sand, para [0093]).
(h) receive remediation software code from the selected human remediation agent computing device that removes or patches the vulnerability, wherein the software code can be authored or curated by the selected human remediation agent; and (At block 410, method 400 involves generating a recommendation machine learning model 508 (FIG. 5A) to provide recommendations to remediate the vulnerability based upon the vulnerability score Vx. As discussed above, in one example, the recommendations may provide updated ratings for the vulnerability to facilitate remediation. In an example, the recommendations may update the control strength to facilitate the remediation, Sand, para [0088]).
However, Sand does not explicitly disclose the limitations:
(a) scan a network to detect accessible computing devices and software applications; (b) catalog system configuration data and vulnerabilities of each computing device and software application detected during the scan;
(e) map the vulnerability to firewall attack signatures to update a firewall for the network, wherein the attack signatures define regular expressions, formatting, identifiers, structures, rules, policies, or other means with which the firewall can detect the vulnerability set
(f) assign and select a human remediation agent to a vulnerability in the vulnerability set by performing a remediation analysis using remediation agent data and remediation history data, wherein the remediation analysis generates a match score for each of a plurality of human remediation agents profiled with at least specialization category, experience data, certification data, performance data, and current task load, and select a human remediation agent based on the match score; and
(g) generate an interactive remediation assignment alert comprising vulnerability display data and an accept-assignment control implemented as a selectable function or link included in the vulnerability display data, the accept‑assignment control being configured, when selected, to (i) accept a vulnerability remediation task and (ii) redirect the selected human remediation agent’s computing device to a webpage, desktop application, or mobile application presenting a graphical user interface with additional information regarding the vulnerability; and, responsive to detecting selection of the accept‑assignment control, assign the vulnerability remediation task to the selected human remediation agent; wherein the interactive remediation assignment alert
(i) deploy the remediation software code within the network computing device or software application associated with the vulnerability
McCarthy discloses:
(a) scan a network to detect accessible computing devices and software applications; (b) catalog system configuration data and vulnerabilities of each computing device and software application detected during the scan; (The threat protection inputs can include source/destination IP and port data and network configuration data, and include white-hat penetration testing and threat hunting, where threat hunting can involve iteratively searching network connected devices, McCarthy, para [0086]. Here, the plurality of network connected cybersecurity threat protection applications are the scanning tools, the port fields correspond to the addressed ports, the returned input/notification data is the captured response, and the network configuration data is the catalogued configuration data).
(e) map the vulnerability to firewall attack signatures to update a firewall for the network, wherein the attack signatures define regular expressions, formatting, identifiers, structures, rules, policies, or other means with which the firewall can detect the vulnerability set (The cybersecurity threat protection applications can provide capabilities such as endpoint protection, anti-phishing, antivirus, firewalls, and so on. Insider threats can result from overly permissive access to sensitive areas or data, lax firewall policies, etc., McCarthy, para [0063], [0073]).
(f) assign and select a human remediation agent to a vulnerability in the vulnerability set by performing a remediation analysis using remediation agent data and remediation history data, wherein the remediation analysis generates a match score for each of a plurality of human remediation agents profiled with at least specialization category, experience data, certification data, performance data, and current task load, and select a human remediation agent based on the match score; and (Threats are assigned to analysts and the analyst is selected based on an analyst threat response profile. The profile can include analyst qualifications, certifications, training, experience, success rate, and so on. It is augmented with metrics such as initial response time, closure response time, SOC caseload histories, threat-response resolutions and resolution metrics as neural-network training data, McCarthy, para [0049]. Here, the human remediation agent is the analyst, the remediation agent data is the analyst threat response profile and performance metrics, the remediation history is the SOC caseload history and resolutions to prior threats, and the match score corresponds to the selection based on the learned profile).
(g) generate an interactive remediation assignment alert comprising vulnerability display data and an accept-assignment control implemented as a selectable function or link included in the vulnerability display data, the accept‑assignment control being configured, when selected, to (i) accept a vulnerability remediation task and (ii) redirect the selected human remediation agent’s computing device to a webpage, desktop application, or mobile application presenting a graphical user interface with additional information regarding the vulnerability; and, responsive to detecting selection of the accept‑assignment control, assign the vulnerability remediation task to the selected human remediation agent; wherein the interactive remediation assignment alert (The generating a cybersecurity threat response can include generating a notification, where the notification can be used to trigger a variety of responses, and these notifications/inputs can be sent as SMS, email, or a graphical display with proposed actions/recommended techniques. Assigning threats to analysts and load balancing/reassigning analyst workloads is done, McCarthy, para [0057]. Here, the notification to the analyst device is the assignment alert, the displayed proposed recommended action is the displayed remediation content, and the assignment corresponds to assigning the remediation task to the selected human agent).
(i) deploy the remediation software code within the network computing device or software application associated with the vulnerability. (The analysis can include determining a source or vector of a virus, the actions taken by the virus, how to counter actions taken by the virus, to whom the virus might be in communication, etc. The antivirus analysis can be used to determine changes or updates to the virus, and how to better detect the virus before it can be deployed, McCarthy, para [0074])
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy to provide vulnerability scoring based on organization-specific metrics (Sand) with cybersecurity operations case triage groupings (McCarthy). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy in order to effectively use a routing construct to assign vulnerabilities to agents based on agent data (See McCarthy, para [0057]).
As per claim 20, Sand and McCarthy disclose the system of claim 19, wherein
Furthermore, Sand discloses:
the first computing device comprises at least one neural network (First local device node retrieves CVSS score based on ML recommendation system 208, Sand, para [0023]).
Claims 5-7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sand et al. (US 20240236137 A1), hereinafter referred to as Sand, in view of McCarthy et al. (US 20230421582 A1), hereinafter referred to as McCarthy, in further view of Oliphant et al. (US 20150033323 A1), hereinafter referred to as Oliphant.
As per claim 5, Sand and McCarthy disclose the system of claim 1, wherein running the executable code stored to a second memory device causes a second processor to:
Furthermore, Sand discloses:
(c) generate historical data using the vulnerability database record; (Continuous monitoring system 206 (FIG. 2) feeds historical vulnerability information, Sand, para [0090])
(d) generate an error rate by comparing the vulnerability classification data and vulnerability category data to historical data; and (The machine learning model 508 may learn through training by comparing recommendations to known outcomes, Sand, para [0091]).
(e) train the at least one neural network by adjusting one or more neural network parameters to reduce the error rate (At T2, the data extraction and analysis module 504 sanitizes the data and determines specific vulnerability information for use as a training set 506 to create and train machine learning model 508. The training set may include data patterns and sequences that are known to result in expected recommendations, Sand, para [0090])
However, Sand in view of McCarthy does not explicitly disclose the limitations:
(a) capture the systems data, the software data, and the software configuration data of the particular network;
(b) create a vulnerability database record comprising the systems data, the software data, the software configuration data, the vulnerability data, the attack signature data, the remediation agent data;
Oliphant discloses:
(a) capture the systems data, the software data, and the software configuration data of the particular network; (Security server 135 collects data from devices including the software installed on those devices, their configuration and policy setting and patches that have been installed, Oliphant, para [0021]).
(b) create a vulnerability database record comprising the systems data, the software data, the software configuration data, the vulnerability data, the attack signature data, the remediation agent data; (An SDK allows programmers to develop security applications that access the data collected in database 146. The applications developed with the SDK access information using a defined API to retrieve vulnerability, remediation and device status information available to the system. In the system, configuration information for each device may take the form of initialization files (*.ini, *.conf), a configuration registry (the Windows registry on Microsoft WINDOWS operating systems) or configuration data held in volatile memory, Oliphant, para [0036], [0037]. This shows a database record, with database 146 storing the data).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy with Oliphant to provide vulnerability scoring based on organization-specific metrics (Sand) and cybersecurity operations case triage groupings (McCarthy) with a virtual patching system (Oliphant). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy with Oliphant in order to prevent attacks occurring prior to completion of patch installation (See Oliphant, para [0036]).
As per claim 6, Sand and McCarthy disclose the system of claim 2, wherein
However, Sand in view of McCarthy does not explicitly disclose the limitation:
the vulnerability is classified as: (i) a known vulnerability stored to the database, (ii) a potential vulnerability, or (iii) not a vulnerability
Oliphant discloses:
the vulnerability is classified as: (i) a known vulnerability stored to the database, (ii) a potential vulnerability, or (iii) not a vulnerability (Code for allowing access to first information from at least one first data storage identifying a plurality of potential vulnerabilities including at least one first potential vulnerability and at least one second potential vulnerability. Determining that the at least one networked device is actually vulnerable to at least one actual vulnerability based on the identified at least one configuration, utilizing the first information from the at least one first data storage identifying the plurality of potential vulnerabilities, Oliphant, claim 3. This describes "actual vulnerabilities", "potential vulnerabilities" and "removal of vulnerability").
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy with Oliphant to provide vulnerability scoring based on organization-specific metrics (Sand) and cybersecurity operations case triage groupings (McCarthy) with a virtual patching system (Oliphant). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy with Oliphant in order to prevent attacks occurring prior to completion of patch installation (See Oliphant, para [0036]).
As per claim 7, Sand and McCarthy disclose the system of claim 2, wherein
However, Sand in view of McCarthy does not explicitly disclose the limitation:
the vulnerability is categorized into the following categories: (i) operating systems, (ii) system software, (iii) application software, or (iv) programming software
Oliphant discloses:
the vulnerability is categorized into the following categories: (i) operating systems, (ii) system software, (iii) application software, or (iv) programming software (A data structure describing a plurality of mitigation techniques for a portion of the mitigation techniques that correspond with a subset of the plurality of the vulnerabilities resulting from an operating system and an application indicated to be on the device, Oliphant, claim 16. The at least one configuration relating to at least one of an operating system or an application of the at least one networked device, and determining that at least one networked device is actually vulnerable to at least one actual vulnerability, the actual vulnerability being a function of the at least one of the operating system or the application of the at least one networked device, Oliphant, claim 13. This describes the vulnerabilities or actual vulnerabilities stemming from an operating system or an application of a networked device).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy with Oliphant to provide vulnerability scoring based on organization-specific metrics (Sand) and cybersecurity operations case triage groupings (McCarthy) with a virtual patching system (Oliphant). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy with Oliphant in order to prevent attacks occurring prior to completion of patch installation (See Oliphant, para [0036]).
As per claim 14, Sand and McCarthy disclose the system of claim 9, wherein the vulnerability is classified into one of the following:
However, Sand in view of McCarthy does not explicitly disclose the limitation:
(i) a known vulnerability stored to the database, (ii) a potential vulnerability, or (iii) not a vulnerability
Oliphant discloses:
(i) a known vulnerability stored to the database, (ii) a potential vulnerability, or (iii) not a vulnerability (Code for allowing access to first information from at least one first data storage identifying a plurality of potential vulnerabilities including at least one first potential vulnerability and at least one second potential vulnerability. Determining that the at least one networked device is actually vulnerable to at least one actual vulnerability based on the identified at least one configuration, utilizing the first information from the at least one first data storage identifying the plurality of potential vulnerabilities, Oliphant, claim 3. This describes "actual vulnerabilities", "potential vulnerabilities" and "removal of vulnerability").
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy with Oliphant to provide vulnerability scoring based on organization-specific metrics (Sand) and cybersecurity operations case triage groupings (McCarthy) with a virtual patching system (Oliphant). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy with Oliphant in order to prevent attacks occurring prior to completion of patch installation (See Oliphant, para [0036]).
As per claim 15, Sand and McCarthy disclose the system of claim 9, wherein
However, Sand in view of McCarthy does not explicitly disclose the limitation:
the vulnerability is categorized into one of the following categories: operating systems, system software, application software, or programming software
Oliphant discloses:
the vulnerability is categorized into one of the following categories: operating systems, system software, application software, or programming software (A data structure describing a plurality of mitigation techniques for a portion of the mitigation techniques that correspond with a subset of the plurality of the vulnerabilities resulting from an operating system and an application indicated to be on the device, Oliphant, claim 16. The at least one configuration relating to at least one of an operating system or an application of the at least one networked device, and determining that at least one networked device is actually vulnerable to at least one actual vulnerability, the actual vulnerability being a function of the at least one of the operating system or the application of the at least one networked device, Oliphant, claim 13. This describes the vulnerabilities or actual vulnerabilities stemming from an operating system or an application of a networked device).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand and McCarthy with Oliphant to provide vulnerability scoring based on organization-specific metrics (Sand) and cybersecurity operations case triage groupings (McCarthy) with a virtual patching system (Oliphant). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand and McCarthy with Oliphant in order to prevent attacks occurring prior to completion of patch installation (See Oliphant, para [0036]).
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sand et al. (US 20240236137 A1) hereinafter referred to as Sand in view of McCarthy et al. (US 20230421582 A1) in further view of Oliphant et al. (US 20150033323 A1), hereinafter referred to as Oliphant in further view of O'Brien et al. (US 20060010497 A1), hereinafter referred to as O'Brien.
As per claim 8, Sand, McCarthy and Oliphant disclose the system of claim 7, wherein
However, Sand, McCarthy and Oliphant do not explicitly disclose:
the vulnerability is further categorized into the following subcategories: Microsoft Windows, macOS, Linux, device drivers, firmware, system utilities, security software, word processing software, spreadsheet software, graphic design software, database management software, communication software, integrated development environments, code editors, compilers, and debuggers
O'Brien discloses:
the vulnerability is further categorized into the following subcategories: Microsoft Windows, macOS, Linux, device drivers, firmware, system utilities, security software, word processing software, spreadsheet software, graphic design software, database management software, communication software, integrated development environments, code editors, compilers, and debuggers (The component may be an application running on an asset 106 such as, for example, a web browser, an operating system, a word-processing application, or any other suitable program, O'Brien, para [0013]. "Any other suitable program" implies that any software type including system utilities, spreadsheet software, security software or design tools could be considered an asset).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand, McCarthy and Oliphant with O'Brien to provide vulnerability scoring based on organization-specific metrics (Sand), cybersecurity operations case triage groupings (McCarthy) and a virtual patching system (Oliphant) with a remediation management system (O'Brien). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand, McCarthy and Oliphant with O'Brien in order to effectively identify vulnerabilities of an asset based on comparing the asset to content associated with vulnerabilities (See O'Brien, para [0013]).
As per claim 16, Sand, McCarthy and Oliphant disclose the system of claim 15, wherein
However, Sand, McCarthy and Oliphant do not explicitly disclose:
the vulnerability is further categorized into one of the following subcategories: Microsoft Windows, macOS, Linux, device drivers, firmware, system utilities, security software, word processing software, spreadsheet software, graphic design software, database management software, communication software, integrated development environments, code editors, compilers, and debuggers
O'Brien discloses:
the vulnerability is further categorized into one of the following subcategories: Microsoft Windows, macOS, Linux, device drivers, firmware, system utilities, security software, word processing software, spreadsheet software, graphic design software, database management software, communication software, integrated development environments, code editors, compilers, and debuggers (The component may be an application running on an asset 106 such as, for example, a web browser, an operating system, a word-processing application, or any other suitable program, O'Brien, para [0013]. "Any other suitable program" implies that any software type including system utilities, spreadsheet software, security software or design tools could be considered an asset).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Sand, McCarthy and Oliphant with O'Brien to provide vulnerability scoring based on organization-specific metrics (Sand), cybersecurity operations case triage groupings (McCarthy) and a virtual patching system (Oliphant) with a remediation management system (O'Brien). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Sand, McCarthy and Oliphant with O'Brien in order to effectively identify vulnerabilities of an asset based on comparing the asset to content associated with vulnerabilities (See O'Brien, para [0013]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAGHAVENDER CHOLLETI whose telephone number is (703) 756-1065. The examiner can normally be reached M-Th 7:30 AM - 4:30 PM EST and on variable Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA can be reached on (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patentcenter for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Respectfully Submitted,
/RAGHAVENDER NMN CHOLLETI/Examiner, Art Unit 2492
/RUPAL DHARIA/Supervisory Patent Examiner, Art Unit 2492