Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This final Office action is in response to the amendments filed 12/11/2025, in which claims 1, 11, and 20 have been amended, no claims have been cancelled, and claims 1-20 remain pending in the application.
The amendment filed on 12/11/2025 has been entered. See the response to arguments below.
Response to Arguments
Applicant's amendments and arguments have been fully considered and are persuasive; however, the arguments are moot in view of the new grounds of rejection set forth below.
With respect to Applicant's arguments regarding the remaining dependent claims 2-10 and 12-19 in the Remarks, Applicant relies on the newly added limitations of independent claims 1, 11, and 20. Please see the Examiner's response above and the detailed rejections below.
Examiner's note: The Examiner respectfully suggests that Applicant correct the misspelled word "AUTOMATATED" in the title. Appropriate correction is required.
Claim Objections
Claim 9 is objected to because the limitation "development team 106" includes a reference character, and reference characters should be enclosed within parentheses. See MPEP 608.01(m), Form of Claims ("Reference characters corresponding to elements recited in the detailed description and the drawings may be used in conjunction with the recitation of the same element or group of elements in the claims. The reference characters, however, should be enclosed within parentheses so as to avoid confusion with other numbers or characters which may appear in the claims. Generally, the presence or absence of such reference characters does not affect the scope of a claim."). Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 7, 9, 11-13, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in further view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava).
Regarding claim 1, Sanchez discloses a processor-implemented method for classifying a triage-related message related to a software application security technical problem, said method comprising (see Sanchez Col. 15, lines 38-41: "Computing system 800 can include ATC server 225 or the computing systems illustrated in FIG. 1 or 5, and broadly represents any single or multi-processor computing device or system capable of executing computer readable instructions."; Col. 4, lines 51-54: "diagram 100 illustrating a model training system that can be used to train an alert triage classification model to identify cyber threats (hereinafter simply "threats")."):
storing, by one or more data processors, the triage-related message in a non-transitory computer-readable storage medium (see Sanchez Col.11 lines8-12: “The alert is then saved to alert database 230, as shown in FIG. 3. Saving an alert in alert database 230 broadcasts an event that can be listened to (e.g., the whole row is sent to the new service-an alert triage classifier application).”, Col.15 lines35-65: “In its most basic configuration, computing system 800 may include at least one processor 855 and a memory 860.”);
wherein the triage-related message is related to an already detected software application security technical problem (see Sanchez Col.9 lines31-32: “ATC server 225 also includes an alert database 230 with existing alerts 235.”);
generating, by the one or more data processors, a triage-related classification for the triage-related message by applying a processor-implemented machine learning model that has been trained to analyze the text of the triage-related message with respect to predetermined approval status classifications (see Sanchez Col. 5 lines1-13: “the ATC model can be trained using a machine learning technique ( e.g., via a form of supervised training), where the ATC model is trained using a set of training data labeled with truth labels ( e.g., whether pre-existing detection messages are threats or not threats based on certain detection characteristics and other user-configurable detection parameters observation record in the training dataset can include a set of independent variables representing the ATC model's inputs and a set of target variables ( e.g., the truth labels) representing the ATC model's desired output(s). The ATC model is then trained to accurately predict the truth label values based on the input features of the observation records.”, Col 8 lines4-14: “ATC model 170 is sufficiently trained ( e.g., when ATC model 170 satisfies a model evaluation criterion based on an evaluated dataset), ATC model 170 is deployed to a machine alert triage classification system 180. Machine alert triage classification system 180 may be used to make threat assessment decisions for detection characteristics 150 collected from real-world machines Using trained ATC model 170, machine alert triage classification system 180 generates a threat classification 195 to identify whether a given detection message is a threat or not a threat.”);
wherein the generated triage-related classification indicates approval status for the triage-related message (see Sanchez Col. 8, lines 11-13: "Using trained ATC model 170, machine alert triage classification system 180 generates a threat classification 195 to identify whether a given detection message is a threat or not a threat."). The Examiner interprets the approval status generated by the ATC model 170 as confirmation of the threat classification of the given message.
Sanchez appears to be silent on wherein the software application security technical problem is to be addressed within a timeframe set by predetermined security severity level criteria; and sending, by the one or more data processors, the generated triage-related classification to a user for remediating the software application security technical problem within the timeframe set by the predetermined security severity level criteria.
However, Yellapragada discloses wherein the software application security technical problem is to be addressed within a timeframe set by predetermined security severity level criteria (see Yellapragada par.145: “Identifying and prioritizing
security findings is directly related to the amount of remediation efforts and in-turn to the time needed for these efforts. Embodiments herein (e.g., methods and models) allow an enterprise(s) to prioritize various security findings, e.g., to allow for a focus (e.g., by a security platform) on a proper subset of (e.g., the most important) one or more software application stacks of the enterprise. Embodiments herein (e.g., methods and models) allow an enterprise(s) to prioritize various security findings, e.g., to allow for a focus (e.g., by a security platform) on a proper subset of (e.g., the most important) remediation efforts."; par. 155: "where a vulnerability(ies) affects certain software, the vendor of the software publishes an update and/or other remediation suggestions to prevent exploitation, e.g., such that applying the patches or remediations would mitigate the risks associated with the vulnerabilities. Thus, the availability of exploits and/or remediations changes the risk associated with a vulnerability. In certain embodiments, exploits and remediations are time-bound and thus increase or decrease in risk over time. For example, an exploit for a multiple (e.g., 5) year old vulnerability might not be relevant anymore and hence the risk of the vulnerability is lower. On the other hand, non-availability of a patch for a new vulnerability would increase the risk as an exploit is imminent in certain embodiments. In certain embodiments, the age of a vulnerability plays a role as a threat actor might want to target newer vulnerabilities, e.g., for which remediations might not exist or have not been implemented."). The Examiner interprets that the vulnerability is time-bound, as set by the associated risk, especially when there is no known available remediation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sanchez's teaching of "processes that implement a machine learned alert triage classification system. One such method involves obtaining a training dataset of a plurality of classified records, where each classified record in the training dataset includes detection characteristics data of a set of machines and threat classification results produced by performing an alert triage classification of the detection messages associated with the set of machines." (see Sanchez Col. 1, lines 49-57) with Yellapragada's teaching that embodiments "(e.g., methods and models) allow an enterprise(s) to prioritize various security findings, e.g., to allow for a focus (e.g., by a security platform) on a proper subset of (e.g., the most important) one or more software application stacks of the enterprise. Embodiments herein (e.g., methods and models) allow an enterprise(s) to prioritize various security findings, e.g., to allow for a focus (e.g., by a security platform) on a proper subset of (e.g., the most important) remediation efforts." (see Yellapragada par. 146).
Sanchez in view of Yellapragada appears to be silent on the following; however, Srivastava teaches sending, by the one or more data processors, the generated triage-related classification to a user for remediating the software application security technical problem within the timeframe set by the predetermined security severity level criteria (see Srivastava par. 0016: "'triaging' refers to a process of prioritizing (timeframe), categorizing, classifying, or assigning remediation action items to vulnerabilities, e.g., based on their severity and potential impact. Triaging objectives may include ensuring that the most important or significant vulnerabilities are addressed as soon as possible, and allocating resources efficiently to mitigate security risks."; par. 0043: "the triaging module 208 may invoke an automatic triaging function to enable the vulnerability to be triaged, e.g., marked as a false positive (the generated triage-related classification) with no user input or review required. As another example, where a positivity classification (the generated triage-related classification) is generated, but the machine learning model returned a result with a confidence level that is below the predefined threshold, the triaging module 208 may cause presentation (sending) of a triaging element in a user dashboard, enabling the user 128 to confirm or adjust the positivity classification."),
wherein the processor-implemented machine learning model generates an external classification category and an internal classification category (see Srivastava par.0040: “The classification module 206 is configured to determine the positivity classification (an external classification) for a given vulnerability based on the probability score generated by the machine learning model. As alluded to above, the probability score may be used directly, e.g., where any vulnerability with a probability score of 0.6 or higher is automatically classified as a false positive (internal classification), or indirectly, e.g., where the probability score is further processed to arrive at a final score. The classification module 206 may cause the positivity classification to be stored in association with the vulnerability,”),
wherein the external classification category provides triage resolution information to an external software development team for the software application security technical problem (see Srivastava par. 0074: "if the confidence level is equal to or below the threshold (e.g., at or below 60%), the vulnerability management system 122 does not automatically finalize triaging. Instead, the vulnerability management system 122 invokes a user confirmation function (operation 514). When the user confirmation function is invoked, the positivity classification generated by the vulnerability management system 122 may be presented to the user 128 for review. For example, and as shown at operation 516 in FIG. 5, a triaging element may be presented in the dashboard 308 with an indication that the vulnerability has been predicted to be a false positive, together with the confidence level. One or more candidate triaging reasons may also be generated and presented to the user,"), wherein the internal classification category provides more detailed triage resolution information than the external classification category (see Srivastava par. 0043: "the triaging module 208 may invoke an automatic triaging function to enable the vulnerability to be triaged, e.g., marked as a false positive (the internal classification) with no user input or review required."; par. 0053: "Historical triaging data 310 may be fed from the databases 126 to the vulnerability management system 122, e.g., to train or retrain a machine learning model (as is further described with reference to FIG. 7 and FIG. 8). 
The historical triaging data 310 may include, for example, a vulnerability description, a severity score, a triaging status, and a triaging reason stored in association with a given vulnerability." The Examiner interprets that the internal classification includes more detail, allowing the internal classification to be validated without external input.), wherein the internal classification category and the external classification category are provided to a system that generates a triage policy based on the internal classification category and the external classification category, wherein the internal classification category is not provided to the external software development team (see Srivastava par. 0072-0074: "The vulnerability management system 122 determines the vulnerability to be a false positive at operation 506, e.g., using the classification module 206 as described above. Turning now to decision operation 508, the vulnerability management system 122 determines whether a confidence level associated with the determined positivity classification exceeds a predefined threshold, e.g., a threshold of 60%. If the confidence level exceeds the threshold, the vulnerability management system 122 invokes an automatic triaging function at operation 510. In response to invoking of the automatic triaging function, the vulnerability management system 122 automatically triages the vulnerability by flagging it as a false positive and not generating any remediation tasks for the user 128. Referring again to decision operation 508, if the confidence level is equal to or below the threshold (e.g., at or below 60%), the vulnerability management system 122 does not automatically finalize triaging. Instead, the vulnerability management system 122 invokes a user confirmation function (operation 514). When the user confirmation function is invoked, the positivity classification generated by the vulnerability management system 122 may be presented to the user 128 for review. 
For example, and as shown at operation 516 in FIG. 5, a triaging element may be presented in the dashboard 308 with an indication that the vulnerability has been predicted to be a false positive, together with the confidence level. One or more candidate triaging reasons may also be generated and presented to the user,"; par. 0128: "wherein the positivity classification indicates that the vulnerability is a true positive, the operations further comprising: in response to the determining of the positivity classification for the vulnerability, automatically generating a remediation task corresponding to the vulnerability; storing the remediation task (Policy) in association with the vulnerability in a database; and causing presentation of the remediation task in the user interface."). The Examiner interprets that, based on the internal and external labels, a remediation task is generated that validates the premise of the reason why a certain vulnerability should be addressed and mitigated if the vulnerability is encountered again. This is consistent with Applicant's instant application ([par. 0024]: "The action enforcer 112 can be thought of as a system that converts the reason that was fed into it, to a policy that validates the premise of the reasons.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada with Srivastava's teaching: "A vulnerability management application according to examples described herein may remove tedious processes and expedite security-related decision-making. For instance, the dashboard may provide triaging elements that are user-selectable to confirm or adjust the positivity classification for the vulnerabilities. Furthermore, the vulnerability management system 122 may invoke automatic functions based on the positivity classification for a particular vulnerability and other factors, such as a confidence indicator associated with the positivity classification (e.g., a confidence level). Automatic functions may include automatic triaging of certain vulnerabilities, automatic remediation task generation, or the generation of candidate triaging reasons" (see Srivastava par. 0035).
Regarding claim 11: claim 11 is a system claim that recites similar limitations as method claim 1 and is rejected based on the same rationale as claim 1.
Regarding claim 20: claim 20 is a non-transitory machine-readable medium claim that recites similar limitations as method claim 1 and is rejected based on the same rationale as claim 1. (See Sanchez Col. 17, line 14 and Col. 19, line 17 for the teaching of a non-transitory medium.)
Regarding claim 2, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1. Yellapragada further discloses wherein an already detected software application security technical problem includes automatically scanning software images or artifacts contained in a third party software product (see Yellapragada par. 0082: "the analyzer 822 is to perform one or any combination of the following analysis functionalities for an imaging technique: (i) obtaining a list of installed applications (e.g., software) and/or packages (e.g., software packages), e.g., and their versions. In certain embodiments, this is obtained by scanning (e.g., certain locations of) the snapshot. In certain embodiments, the analyzer 822 compares the list of installed applications and/or packages against threat intelligence data (e.g., from threat feed 820) to obtain a list of vulnerabilities against these installed applications and/or packages."; par. 148: "there are threat intelligence sources, e.g., from a third party relative to an enterprise. In certain embodiments, the threat intelligence sources provide threat intelligence data, for example, where the threat intelligence data is provided under a category system (e.g., and corresponding identifier), e.g., according to a standard. In certain embodiments, threat intelligence data is (i) common vulnerabilities and exposures (CVE) data, e.g., that provides identifiers for vulnerabilities, (ii) common weakness enumeration (CWE) data, e.g., that provides identifiers for software and/or hardware weaknesses, (iii) common vulnerability scoring system (CVSS) data, e.g., a standardized scoring system to score the severity of vulnerabilities, (iv) common weakness scoring system (CWSS) data.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada and Srivastava with respect to claim 1 with Yellapragada's teaching: "identifying and defining the relations between multiple types of threat intelligence data (e.g., from different threat intelligence sources) (e.g., the corresponding identifiers) provides a rich feature set that is input into a machine learning model, e.g., to the model infers the exploitability (e.g., a corresponding exploitability score) of each of the threat intelligence data (e.g., CVE, CWE and CPE)." (see Yellapragada par. 153).
Regarding claim 12: claim 12 is a system claim that recites similar limitations as method claim 2 and is rejected based on the same rationale as claim 2.
Regarding claim 3, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 2. Yellapragada further discloses further comprising enumerating vulnerabilities in the third party software product artifacts based on the scanned software images or artifacts (see Yellapragada par. 154: "FIG. 14 is a block diagram illustrating a common platform enumeration (CPE) identifier 1404, a common weakness enumeration (CWE) identifier 1406, and a common vulnerability scoring system (CVSS) identifier 1408 that are associated with a single common vulnerabilities and exposures (CVE) identifier 1402 and the relationships there between according to some embodiments. Thus, in certain embodiments, a security platform (e.g., security platform 100 herein) (e.g., a machine learning model thereof) is to associate (e.g., identify and/or define) the relations between multiple types of threat intelligence data, e.g., to infer the exploitability (e.g., a corresponding exploitability score) of each of the threat intelligence data (e.g., CVE, CWE and CPE)."; par. 131: "security platform 100 includes a collector 818 that connects with various tools to extract data related to assets, vulnerabilities, logs, etc. In certain embodiments, when an imaging technique or method is enabled, the collector 818 connects with the API(s) 804 of one or more cloud providers 806H (e.g., CSPs) to carry out the extraction of artifacts related to the asset(s) (e.g., instance(s)) in the monitored CSP accounts."). The Examiner construes that the vulnerabilities in the third party software product, such as CPE, CWE, and CVSS, are associated and enumerated with the CVE, which is in turn enumerated by the security platform on the extracted artifacts.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada and Srivastava with respect to claim 2 with Yellapragada's teaching: "the threat intelligence data includes common vulnerabilities and exposures (CVE) data. In certain embodiments, the common vulnerabilities and exposures data comprises common platform enumeration data, common weakness enumeration data, common vulnerability scoring system data, or any single or combination of these." (see Yellapragada par. 149).
Regarding claim 13: claim 13 is a system claim that recites similar limitations as method claim 3 and is rejected based on the same rationale as claim 3.
Regarding claim 7, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1. Srivastava further teaches wherein the detected vulnerability being addressed includes accessing a dashboard through the vulnerability reporting platform and triaging the vulnerability by providing a textual explanation (see Srivastava par. 0074: "the vulnerability management system 122 may be presented to the user 128 for review. For example, and as shown at operation 516 in FIG. 5, a triaging element may be presented in the dashboard 308 with an indication that the vulnerability has been predicted to be a false positive, together with the confidence level. One or more candidate triaging reasons may also be generated and presented to the user, e.g., based on historical triaging data stored in the databases 126. For example, in the case of a false positive, the vulnerability management system 122 may automatically extract, from data records in the databases 126, two or three triaging reasons most commonly provided or selected for false positive vulnerabilities with similar details, such as similar vulnerability descriptions or severity scores. The user 128 may review the candidate triaging reasons (e.g., the proposed reasons why the vulnerability should be marked as a false positive) and select one or more of them via the dashboard 308, thereby facilitating the triaging process."). The Examiner interprets the reasons presented to the user as the textual explanation, based on the historical triaging data in the database.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada and Srivastava with respect to claim 1 with Srivastava's teaching: "The method 400 includes presenting, in a user interface, output data representing the positivity classification for the vulnerability (operation 414). The output data may include details of the vulnerability, e.g., its description or severity score, together with the positivity classification and the calculated confidence indicator. The output data may include various other data and graphical elements, e.g., presented via the dashboard 308, such as triaging indicators or suggested triaging reasons." (see Srivastava par. 0065-0066).
Regarding claim 17: claim 17 is a system claim that recites similar limitations as method claim 7 and is rejected based on the same rationale as claim 7.
Regarding claim 9, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1. Srivastava further teaches wherein the textual explanation from the software development team 106 is validated by the processor-implemented machine learning model (see Srivastava par. 0030: "Machine learning is used to classify a vulnerability as a true positive or a false positive. The classification of a vulnerability as either a true positive or a false positive is referred to herein as a 'positivity classification.'"; par. 0052-0053: "The databases 126 may store information such as registered components (e.g., applications and security tools registered to the account of the user 128), security tool configurations, scan histories, vulnerability results, machine learning model predictions including positivity classifications, or confidence level data. The databases 126 may also store remediation tasks and triage status information, e.g., for a particular vulnerability, an indication of whether the vulnerability has been triaged and, if so, further information such as a triaging reason. Historical triaging data 310 may be fed from the databases 126 to the vulnerability management system 122, e.g., to train or retrain a machine learning model (as is further described with reference to FIG. 7 and FIG. 8). The historical triaging data 310 may include, for example, a vulnerability description, a severity score, a triaging status, and a triaging reason stored in association with a given vulnerability."; par. 
0131: “a data record of a vulnerability, the data record generated by an information technology (IT) security tool and comprising a vulnerability description; automatically generating, by the one or more processors, an input vector based on the vulnerability description; generating, by a machine learning model and using the input vector, a probability score for the vulnerability; automatically determining, by the one or more processors and based on the probability score for the vulnerability, a positivity classification for the vulnerability; and causing presentation, by the one or more processors, in a user interface, of output data representing the positivity classification for the vulnerability.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada and Srivastava with respect to claim 1 with Srivastava's teaching: "the trained machine learning program suitable for use in automatic assessment or classification of vulnerabilities is a logistic regression model. Historical triaging data may be exported from various security tools. A training data set may, for example, include thousands of training records, each containing the following: Vulnerability identifier; Vulnerability description; Severity level (e.g., numerical score); Triaging status; and Triaging reason." (see Srivastava par. 0102-0109).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava), in further view of Misra et al. (US-20250053662-A1 hereafter Misra).
Regarding claim 4, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1 but do not explicitly teach the claimed timeframe limitation; however, Misra teaches wherein the software application security technical problem to be addressed within a timeframe further comprising setting the timeframe based upon a service level agreement defined by the organization's security policy (see Misra par. 0078-0080: "the purpose of integrating the vulnerability management platform formula, practices, and functionalities into the vulnerability management platform is to foster better collaboration between security, development, and incident response teams. By linking security-related knowledge directly to tickets and code commits, the organization can improve incident response capabilities by providing relevant security information within the ticket tracking system, enabling teams to address security issues efficiently;…. The first response SLA provides an acceptable time limit within which the development and security teams should acknowledge the receipt of a security concern or finding. The resolution SLA defines the time limit for resolving or mitigating the identified security concern, and specifies the maximum time allowed for the development and security teams to implement a fix or apply appropriate security measures. The developer SLA specifically pertains to the developers' responsibilities in addressing security findings, and defines the time allocated to the development team for implementing security fixes…. each of the plurality of SLAs is associated with different priority levels, including Critical, High, Medium, and Low, corresponding to different severity levels of security findings/issues. Each of the plurality of SLAs is also provided with a time limit configuration, and global settings… by offering SLA configuration capabilities, the vulnerability management platform enables organizations to set clear expectations and establish time limits for acknowledging and resolving security concerns based on their severity levels."). The Examiner construes that the timeframe to address the technical problem is defined by the organization-defined SLAs, such as the first response, resolution, and developer SLAs.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sanchez in view of Yellapragada and Srivastava with respect to claim 1 with Misra's teaching: "The system further comprises an SLA configuration module configured to define and configure a plurality of Service Level Agreements (SLAs) at an application or sub-application level to set acceptable timelines for acknowledging and mitigating security concerns." (see Misra par. 0037).
Regarding claim 14, it is a system claim that recites similar limitations as method claim 4 and is rejected based on the same rationale as claim 4.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava), in view of Misra et al. (US-20250053662-A1 hereafter Misra), in further view of Thimmegowda (US-11593477-B1 hereafter Thimmegowda).
Regarding claim 5, Sanchez in view of Yellapragada, Srivastava, and Misra teach the method of claim 4. Sanchez in view of Yellapragada, Srivastava, and Misra appear to be silent on further comprising starting the service level agreement timer that counts down time remaining to fix the vulnerability.
However, Thimmegowda teaches further comprising starting the service level agreement timer that counts down time remaining to fix the vulnerability. (see Thimmegowda Col. 202 lines 41-63: “the processing of data ingested by the IT and security operations application 1602 includes assigning a severity level to events or incidents. As indicated above, a severity level assigned to a given event may generally define an expected impact or importance of the event to the security or operation of an associated IT environment… In some embodiments, each severity level is associated with a respective service level agreement (SLA), which may be defined as an amount
of time that is permitted to pass before the actions taken by analysts relative to the event or incident are considered late. In some embodiments, the use of SLAs in connection with severity levels has at least two purposes: to track an amount of time remaining before an event or incident is considered due and to track an amount of time that users (also referred to as "approvers" in this context) have to approve an action
before the approval is escalated to another user.”). Examiner interprets that the security operations application assigns a severity level to a given event. Each severity level is associated with an SLA that defines the amount of time permitted, and the SLA also tracks the amount of time remaining on a given incident, which Examiner interprets as the start of a timer that counts down the time remaining before the action is considered late.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sanchez in view of Yellapragada, Srivastava, and Misra's teachings of claim 4 with Thimmegowda's teaching “an IT and security operations application 1602 also executes various processes relative to events (e.g., executes automated actions, executes playbooks, etc.) in an order that is defined at least in part based on a severity level associated with each event, where higher severity events generally are processed ahead of lower severity events.” (see Thimmegowda Col. 202 lines 64-67 and Col. 203 lines 1-2).
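Purely as an illustration of the SLA-timer concept attributed to Thimmegowda above, a countdown against a per-incident deadline could be sketched as follows; the class and method names are assumptions for illustration and do not appear in the reference:

```python
from datetime import datetime, timedelta

class SlaTimer:
    """Illustrative timer tracking time remaining before an action on an
    incident is considered late under its SLA (names are assumed, not
    Thimmegowda's)."""

    def __init__(self, started_at: datetime, allowed: timedelta):
        # The timer starts when the incident is opened; the SLA fixes how
        # much time is permitted before the response is late.
        self.deadline = started_at + allowed

    def remaining(self, now: datetime) -> timedelta:
        # Counts down toward zero; negative once the SLA is breached.
        return self.deadline - now

    def is_late(self, now: datetime) -> bool:
        return now > self.deadline
```

Such a timer would supply both uses the reference names: tracking the time remaining before an incident is due, and flagging when an approval or fix has become late.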
Regarding claim 15, it is a system claim that recites similar limitations as method claim 5 and is rejected based on the same rationale as claim 5.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava), in further view of Thimmegowda (US-11593477-B1 hereafter Thimmegowda).
Regarding claim 6, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1. Sanchez in view of Yellapragada and Srivastava appear to be silent on further comprising indicating by a software development team through a vulnerability reporting system how the detected vulnerability is to be addressed.
However, Thimmegowda teaches further comprising indicating by a software development team through a vulnerability reporting system how the detected vulnerability is to be addressed. (See Thimmegowda Col. 184 line 65 - Col. 185 lines 1-2: “an IT and security operations application 1602 enables security teams and other users to automate repetitive tasks, to efficiently respond to security incidents and other operational issues, and to coordinate complex workflows across security teams and diverse IT environments.”, Col. 185 lines 44-60: “using the application environment 205, the IT and security operations application 1602 includes various custom web-based interfaces (e.g., provided by a mission control service 1608) that may or may not leverage one or more UI components provided by the application environment 205. In this context, "mission control" refers to any type of interface or set of interfaces that enable users broadly to obtain information about their IT environments, configure automated actions, playbooks, etc., and perform other operations related to IT and security infrastructure management. The IT and security operations application 1602 may further include middleware business logic (including, for example, an incident management service 1628, a threat intelligence service 1630, an artifact service 1632, a file storage service 1634, and an orchestration, automation, and response (OAR) service 1616) implemented on a middleware platform of the developer's choice.”, Col. 190 lines 45-52: “The IT and security operations application 1602 may be configured with a number of default statuses, such as "new" or "unknown" to indicate incidents that have not yet been analyzed, "in progress" for incidents that have been assigned to an analyst and are under investigation, "pending" for incidents that are waiting input or action from an analyst, and "resolved" for incidents that have been addressed by an assigned analyst.”).
Examiner interprets that the indication of how to address the detected vulnerability is provided through the security operations application, which enables security teams to efficiently respond to security incidents and coordinate complex workflows across security teams. Also, the security operations application 1602 includes middleware business logic comprising an incident management service 1628 and an orchestration, automation, and response (OAR) service.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sanchez in view of Yellapragada and Srivastava's teachings of claim 1 with Thimmegowda's teaching “The IT and security operations application 1602, for example, uses the application environment 205 to interface with the data intake and query system 108 to obtain relevant data, process the data, and display it in a manner relevant to the IT operations context. As shown, the IT and security operations application 1602 further includes additional backend services, middleware logic, front-end user interfaces, data stores, and other computing resources, and provides other facilities for ingesting use case specific data and interacting with that data.” (see Thimmegowda Col. 185 lines 34-43).
Regarding claim 16, it is a system claim that recites similar limitations as method claim 6 and is rejected based on the same rationale as claim 6.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava), in further view of Liu et al. (CN-116680699-A hereafter Liu).
Regarding claim 8, Sanchez in view of Yellapragada and Srivastava disclose the method of claim 1. Sanchez in view of Yellapragada and Srivastava appear to be silent on wherein the triaging the vulnerability by providing the textual explanation includes extending the service level agreement time for a period until a prespecified triage period expires.
However, Liu teaches wherein the triaging the vulnerability by providing the textual explanation includes extending the service level agreement time for a period until a prespecified triage period expires. (See Liu par.195 : “The risk degree of the loopholes is generally divided into 5 latitudes of serious, high-risk, medium-risk, low-risk and information, the latitudes are marked by a system, the loophole responsible person responds according to SLAs with different risk levels, for example, the serious loopholes need to be repaired within 12 hours and concurrent versions, and the medium-risk loopholes can be repaired within orderly time, for example, the loopholes can be repaired in the next version or the next security update; for low-risk vulnerabilities, the repair may be performed over a longer period of time, such as within a few months of the future.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sanchez in view of Yellapragada and Srivastava's teachings of claim 1 with Liu's teaching “The system can also work cooperatively with a threat information aggregation engine, and the risk degree of the loopholes can be estimated more accurately according to the analysis result of the threat information so as to improve the accuracy of the priority of the loopholes. Finally, the vulnerability priority ranking engine can aggregate, analyze and output the ranked results, so that a vulnerability responsible person can conveniently solve relevant vulnerabilities in a specified SLA, feedback is provided, and the accuracy of vulnerability priority ranking is continuously improved.” (see Liu par. 177).
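By way of illustration only, the claimed triage-based SLA extension could be sketched as follows. The triage-period length, function name, and the rule of capping any extension at the expiry of the triage window are illustrative assumptions, not drawn from Liu or the claim:

```python
from datetime import datetime, timedelta

# Assumed prespecified triage window; the seven-day value is illustrative.
TRIAGE_PERIOD = timedelta(days=7)

def triaged_deadline(original_deadline: datetime,
                     triage_started_at: datetime,
                     explanation: str) -> datetime:
    """Extend the SLA deadline while a textual triage explanation is on
    record, but never past the point at which the triage period expires."""
    if not explanation.strip():
        raise ValueError("triaging requires a textual explanation")
    triage_expires = triage_started_at + TRIAGE_PERIOD
    # The deadline is only ever pushed later, never pulled earlier.
    return max(original_deadline, triage_expires)
```

Under this sketch, a finding whose original deadline falls inside the triage window inherits the triage-window expiry as its new deadline, while a finding already due later keeps its original deadline.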
Regarding claim 18, it is a system claim that recites similar limitations as method claim 8 and is rejected based on the same rationale as claim 8.
Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (US-12074897-B1 hereafter Sanchez), in view of Yellapragada et al. (US-20230205891-A1 hereafter Yellapragada), in view of Srivastava et al. (US-20240330473-A1 hereafter Srivastava), in further view of Parla et al. (US-20250097237-A1 hereafter Parla).
Regarding claim 10, Sanchez in view of Yellapragada and Srivastava teach the method of claim 9. Sanchez in view of Yellapragada and Srivastava appear to be silent on, however Parla teaches, wherein the processor-implemented machine learning model includes a large language model (LLMs) or GPT 4 model or LlaMa model for generating the internal classification category and the external classification category. (see Parla par. 0048-0049: “When combined with penetration testing to identify additional variants of malware, the integration of a Large Language Model (LLM) brings an arsenal of capabilities to the table for thwarting potential cybersecurity attacks. Leveraging its code summarization processing capabilities, an LLM can meticulously analyze the patterns and attributes of these new malware variants. This analytical prowess enables the identification of nuanced similarities and trends even in previously successful penetrations of the security system. As a result, the LLM contributes to early detection and classification, ensuring that security teams can swiftly recognize potential threats and respond with targeted countermeasures. a threat management service can use LLM to identify differences between original malware and newly detected malware. This helps avoid the need to predict and test numerous modifications. The differences can then be used to explain/classify what techniques have been employed by the attacker to bypass detection, to identify remediation techniques to thwart the potential malware attack taking place, or future network threats. This also limits the number of possible iterations in predicting further modifications that may be attempted.”, par. 0052: “LLMs can also lend a hand in automating response recommendations. Drawing from their extensive analysis, LLMs can offer actionable suggestions for responding to the identified malware variants.
This can encompass fine-tuning firewall rules, adjusting intrusion prevention settings, and modifying other security policies to defuse threats preemptively. Additionally, LLMs contribute to the training of adaptive security systems. By generating a diverse array of simulated attack scenarios, they enable the security management system to refine its ability to detect and combat emerging threats and continually defuse threats preemptively.”). Examiner interprets that the internal classification category corresponds to the machine learning capability to analyze the patterns and attributes of new malware variants, which enables the LLM to identify nuanced similarities and trends even in previously successful penetrations and contributes to early detection and classification. Examiner construes that the external classification category corresponds to the LLM's ability to offer actionable automated response suggestions for the identified malware variants.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sanchez in view of Yellapragada and Srivastava's teachings of claim 9 with Parla's teaching “the predictive analysis capability of LLMs comes into play, as they can forecast the possible impact and propagation of these new malware variants. By cross-referencing their attributes with historical attack data, LLMs aid security teams in assessing risks and prioritizing responses.” (see Parla par. 0050).
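Purely as an illustration of how an LLM might be asked to generate the two claimed categories, the sketch below builds a classification prompt and parses a labeled reply. The prompt wording, category labels, and two-line reply format are assumptions; a real system would send the prompt to a GPT-4 or LLaMa style model, which is not reproduced here:

```python
def build_prompt(finding: str) -> str:
    """Assemble a prompt asking an LLM for both classification categories
    (illustrative wording; not from Parla)."""
    return (
        "Classify the following security finding.\n"
        "Reply with exactly two lines:\n"
        "internal: <pattern/attribute analysis label>\n"
        "external: <recommended response label>\n"
        f"Finding: {finding}\n"
    )

def parse_reply(reply: str) -> dict:
    """Extract the internal and external category labels from the model's
    reply, ignoring any other lines."""
    categories = {}
    for line in reply.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in ("internal", "external"):
            categories[key.strip()] = value.strip()
    return categories
```

In this sketch the internal category would capture the pattern/attribute analysis of the variant and the external category the recommended automated response, mirroring the Examiner's construction above.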
Regarding claim 19, it is a system claim that recites similar limitations as method claim 10 and is rejected based on the same rationale as claim 10.
Conclusion
The prior art made of record and not relied upon is considered pertinent to
applicant's disclosure:
Kumar et al. (US-20250023909-A1) discloses a data protection system that implements an SVM-based classifier and uses machine learning to detect cyber-attacks or other security threats to a data protection system in advance, to notify the user of possible attacks, and also to instigate any counter-attacks to the best possible extent. Process 300 begins with gathering labeled data, 302, which includes data related to past security threats, their impact, and the associated remediation steps. There are three categories of threat issues that are classified. The first category is self-healable issues, where the SVM model detects a threat and takes automated steps to fix the issue. These steps are predefined in the training dataset and include actions such as taking a backup of data, removing the problematic code (e.g., a virus or ransomware), and applying known steps, KBAs (Knowledge-based articles), or documentation to resolve the issue. The second category is manual fixing by the user. In this case, the SVM model detects a threat but cannot automatically remediate the issue. Instead, the model provides recommendations and guidance to the user on how to fix the issue manually.
Madison et al. (US-20240338455-A1) discloses a system that may detect a current alert in a computing environment. After identifying a current alert, the system may determine a current digital artifact corresponding to the current alert, wherein the current digital artifact comprises digital forensic evidence. The system may determine a known vulnerability, wherein the known vulnerability comprises a known digital artifact and a public risk score, and may determine a risk score for the known vulnerability by leveraging the rate of frequency of the known vulnerability within the computing environment. The system may receive a plurality of proposed patches and may generate for display, on a user interface, a ranking of alerts, wherein the ranking of alerts comprises the current alert and the plurality of proposed patches, wherein the current alert is sorted by the risk score, wherein the plurality of proposed patches is sorted by a popularity metric, and wherein the popularity metric describes community support for each proposed patch in the plurality of proposed patches.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUILIO MUNGUIA whose telephone number is (571) 270-5277. The examiner can normally be reached M-F 9:30AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUILIO MUNGUIA/
Examiner, Art Unit 2497

/ELENI A SHIFERAW/
Supervisory Patent Examiner, Art Unit 2497