Prosecution Insights
Last updated: April 19, 2026
Application No. 18/320,105

SECURITY RISK MANAGEMENT ENGINE IN A SECURITY MANAGEMENT SYSTEM

Non-Final OA §103 §112
Filed
May 18, 2023
Examiner
POUDEL, SAMIKSHYA NMN
Art Unit
2436
Tech Center
2400 — Computer Networks
Assignee
Microsoft Technology Licensing, LLC
OA Round
3 (Non-Final)
44%
Grant Probability
Moderate
3-4
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 44% of resolved cases
44%
Career Allow Rate
8 granted / 18 resolved
-13.6% vs TC avg
Strong +80% interview lift
+80.0%
Interview Lift
resolved cases with vs. without interview
Typical timeline
2y 10m
Avg Prosecution
29 currently pending
Career history
47
Total Applications
across all art units

Statute-Specific Performance

§101
16.2%
-23.8% vs TC avg
§103
54.8%
+14.8% vs TC avg
§102
17.5%
-22.5% vs TC avg
§112
11.5%
-28.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 18 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/04/2025 has been entered.

Response to Amendment

In the response filed on 11/04/2025, the applicant amended claims 1-2, 5-9, 11, 14, 16, 18, and 19. No claims were added.

Response to Arguments

With respect to 35 U.S.C. §112(b): Applicant's claim amendments and remarks filed on 11/04/2025 have been fully considered and overcome the §112(b) rejections of claims 1, 11, and 16 as presented in the final Office action mailed 08/04/2025. Therefore, the §112(b) rejections have been withdrawn.

With respect to 35 U.S.C. §102 and §103 rejections: Applicant's arguments filed on 11/04/2025 have been received and entered. Applicant's arguments with respect to the newly amended independent claims ("Claim Rejections - 35 USC § 103," remarks pages 4-7) have been considered. Applicant argues that Green in view of Griffin fails to teach the amended limitations directed to a "contextual security matrix (CSM)," including that the CSM is a structured data representation mapping security issues to contextual information instances with contextual scores; that Green is allegedly limited to static lists/rankings and rule-driven scoring and lacks a model-driven matrix artifact; and that combining Griffin with Green does not cure these deficiencies.
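The disputed CSM limitation, as characterized in the applicant's remarks above, can be sketched as a small lookup structure. This is an illustrative sketch only: the issue names, context labels, scores, and the multiplicative combining rule are hypothetical assumptions, not taken from the application or the cited art.

```python
# Illustrative sketch of the claimed contextual security matrix (CSM).
# All names, values, and the combining rule are hypothetical assumptions.

# Base-scores assigned to security issues by the CSM model.
base_scores = {"ISSUE-1": 7.5, "ISSUE-2": 5.0}

# The CSM maps (security issue, contextual-information instance) -> contextual score.
csm = {
    ("ISSUE-1", "internet_facing"): 1.4,
    ("ISSUE-1", "behind_firewall"): 0.6,
    ("ISSUE-2", "internet_facing"): 1.1,
}

def risk_score(issue, context):
    """Combine the issue's base-score with its contextual score
    (here, by simple multiplication) to quantify security exposure."""
    return base_scores[issue] * csm[(issue, context)]

print(risk_score("ISSUE-1", "internet_facing"))  # 10.5
```

The sketch makes the dispute concrete: the matrix is a distinct data artifact mapping issue-context pairs to contextual scores, as opposed to scores produced only by rule-driven computation.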
However, the examiner notes that Green teaches algorithmically determining risk scores and modifying such scores based on multiple factors (e.g., normalization of risk data from multiple sources, base risk determination, modification/escalation rules, mitigation/environmental factors, impact assessment, and calculation of overall risk using formulas). The examiner acknowledges the applicant's perspective; however, the arguments are moot because the claim amendments introduce new claim limitations that have not previously been considered. Therefore, the new §103 ground of rejection relies on a new combination of references, as presented below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 recites the limitations "accessing a security issue..", "identifying contextual information associated with the security issue", and "a contextual security matrix (CSM) that is a structured data representation that maps a first security issue to a first instance of the contextual information having a first contextual score", which create internal ambiguity. It is not clear whether the "first security issue" is the same as "the security issue" accessed earlier or a different issue in the matrix, and the claim later uses "the security issue" again, creating an unclear relationship between "a security issue" and "a first security issue". The examiner suggests clarifying the scope of these limitations. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims.
Appropriate correction is required.

Claims 1, 11, and 16 recite the limitation "the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information", which reads as though the matrix itself contains risk scores (outputs) rather than being used to compute them. It is unclear whether the CSM stores risk scores or is used to generate risk scores; "the CSM comprises CSM-based risk scores" may be internally inconsistent with the later "generating CSM-based risk scores". The examiner suggests clarifying the scope of these limitations. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.

Claim Objections

Claim 2 is objected to because of the following informalities: the limitation "a contextual security matrix model generator is, associated with a security risk management engine, supports generating the CSM model associated with security issue data, the contextual information, and recommended remediation action data of the CSM" is grammatically broken (i.e., missing proper punctuation and verb structure) and renders the scope of claim 2 unclear. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 20190312905 A1) in view of Shubhabrata (US 9692778 B1).

Regarding claim 1, Green teaches a computerized system comprising: one or more computer processors; computer memory storing computer-useable instructions that, when used by the one or more computer processors, cause the one or more computer processors to perform operations, the operations (Green, a non-transitory computer readable medium is disclosed comprising instructions for determining a secured system security risk score.
The instructions may cause the system to execute a method, [0006] one or more processors, for executing program instructions, [0071]) comprising: accessing a security issue associated with a computing device in a computing environment (Green, receiving, on an electronic network, security data corresponding to a security vulnerability of each of a plurality of servers, each of the plurality of servers being associated with a secured system, [0004] Fig 22, At step 2905, on an electronic network, security data are received corresponding to at least one security vulnerability associated with each of a plurality of servers, each of the plurality of servers being associated with a secured system, [0063]); identifying contextual information associated with the security issue, wherein the contextual information comprises a computing environment configuration or state that affects a security exposure of the security issue on the computing environment (Green, the system detects and categorizes security vulnerabilities and modifies the assessed risk level of data sources, servers, Internet of Things (IoT) devices, systems, etc., based on the categorization. The assessed risk level is modified over time based upon predetermined rules, such as the time since discovery, and mitigation steps taken, [0034] FIG. 3, mitigation steps are taken based on risks and impacts that have been identified, such as applying mitigation at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environment reflects varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk mitigating components such as white lists or a demilitarized zone/perimeter network.
A server obtains a medium mitigated risk by being associated with a medium-risk environment 910, which adds additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc. [0045] Risk levels of servers can be upgraded and downgraded. For example, a medium-risk server 1025 can be moved to a high-risk environment 905, and the server can be upgraded to a high-risk server 1030, [0046]) [Examiner interprets the system's detection and categorization of vulnerabilities by evaluating factors such as whether the server is in a DMZ versus behind a firewall (i.e., contextual information such as environment configuration or state), and its modification of the overall assessed risk level accordingly, as the limitation above]; accessing a contextual security matrix (CSM) that is a structured data representation that maps a first security issue to a first instance of the contextual information having a first contextual score (Green, may receive a plurality of security ratings from different data sources, which may be normalized to a single security rating standard. The security ratings may be received from a plurality of data sources, and may evaluate the security risk of data sources, servers, IoT devices, systems, networks, environments, other devices, etc. The determined security ratings may be increased or decreased based upon predetermined multipliers, [0035] FIG. 20, risk data may be received from a plurality of data sources. For example, risk data may be received from ACAS, STIG, SCAP, Fortify, POA&M, etc. As shown in table 2005, risk data from one or more risk assessment data sources may be normalized to determine a composite risk score and/or risk rating, [0061] FIG. 21 is a listing of formulas that may be used to determine risk levels and/or scores discussed herein. As discussed above, a base score may be determined at step 2405 that may be an aggregated normalized score, such as a summation of a plurality of normalized scores.
A mitigated risk level score may also be determined at step 2410. For example, system security metrics may be averaged to determine an overall mitigated risk level and/or mitigated risk level score, [0062]) [Examiner interprets the system's use of structured scoring (normalized inputs, multipliers/mitigators, and formulas/steps that compute risk scores based on factors like mitigation environment, elapsed time, escalation thresholds, and impacts) as functionally similar to mapping an (issue, context) pair to a modified score]; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores (Green, may receive a plurality of security ratings from different data sources, which may be normalized to a single security rating standard. The security ratings may be received from a plurality of data sources, and may evaluate the security risk of data sources, servers, IoT devices, systems, networks, environments, other devices, etc., [0035] FIG. 20, risk data may be received from a plurality of data sources. For example, risk data may be received from ACAS, STIG, SCAP, Fortify, POA&M, etc. As shown in table 2005, risk data from one or more risk assessment data sources may be normalized to determine a composite risk score and/or risk rating, [0061]) [Plurality of security issues (i.e., many vulnerabilities across many servers, with multiple factors used to compute scores)]; wherein the CSM is generated using a CSM model that is a computational framework that algorithmically assigns base-scores to security issues (Green, FIG. 21 is a listing of formulas that may be used to determine risk levels and/or scores discussed herein. As discussed above, a base score may be determined at step 2405 that may be an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score may also be determined at step 2410.
For example, system security metrics may be averaged to determine an overall mitigated risk level and/or mitigated risk level score, [0062] a server security vulnerability score may be determined, for each of the plurality of servers, based on the security data corresponding to the at least one security vulnerability for each of the plurality of servers, [0063]) [Examiner interprets the system's assignment of a base score/risk score based on vulnerability/security data and normalized ratings as an algorithmic scoring framework]; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Green, may detect and categorize security vulnerabilities, and modify the assessed risk level of data sources, servers, Internet of Things (IoT) devices, systems, etc., based on the categorization. The assessed risk level may further be modified over time based upon predetermined rules, such as the time since discovery, and mitigation steps taken, [0034] an overall system risk may be determined that may be based on the determined base level risk, mitigated risk level, escalation, and/or impact assessment. For example, one or more of these values may be averaged to determine the system level risk, [0062]) [Green supports a base score and multiple modifiers/factors contributing to a final risk score]; based on the security issue and the contextual information, determining, a base-score of the security issue and one or more contextual scores corresponding to the security issue and the contextual information (Green, Fig 2, Each data source 205 may be processed by the Active Engine 207, which may normalize the data; each data source 205 may then be analyzed to establish security risk status (i.e., the security issue).
The risk level of each server in the cluster/hierarchy of servers 210 may be evaluated independently. Once a base level of risk (i.e., base score of the security issue) is determined for servers 215, this base level may be modified based on a variety of factors (i.e., the contextual information), [0044] Fig 6, table 1205 (i.e., the CSM) containing data feed vulnerabilities (i.e., the plurality of security issues) vs frequency in days and risk thresholds (i.e., the contextual information and scores) [0048] Authorized users may be able to configure the values shown in table 1505, such as the number of non-remediated or unmitigated days to escalate the risk level from low to medium, medium to high, etc. The number of days to escalate the risk of a server to the system, application, and/or PMO or other higher organizational and/or network level, may also be set by the software and/or by the user, [0051] the security impact of elements in the electronic environment may also be assessed. The security impact may be evaluated independently of risk level, [0053]) [Examiner interprets the system's computation of a base risk level, with separate application of factors like mitigation, elapsed time/thresholds, and impact (i.e., contextual information), as the limitation above]; based on the base-score of the security issue and the one or more contextual scores of the security issue, generating a risk score that quantifies the security exposure associated with the security issue (Green, Fig 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores.
…At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., the base-scores of the security issues), mitigated risk level, escalation, and/or impact assessment (i.e., the contextual scores of instances of the contextual information), [0062] Fig 22, At step 2910 a server security vulnerability score is determined, for each of the plurality of servers, based on the security data corresponding to the at least one security vulnerability for each of the plurality of servers (i.e., instances of contextual information). At step 2915 the server security vulnerability score can be modified, for each of the plurality of servers, based on a time elapsed since a discovery of at least one security vulnerability. At step 2920, a secured system security vulnerability score (i.e., a CSM-based risk score) is determined based on the server security vulnerability score for each of the plurality of servers, [0063] apply multipliers and mitigators to increment and decrement a meta risk score for servers, systems, IoT devices, environments, other devices, etc., associated with an organization. Multipliers may be increased based upon the discovery dates of vulnerabilities, the lifespan of vulnerabilities, and/or remediation types and dates, [0066]) [Examiner interprets the system's generation of an overall risk score representing the quantified security risk of servers, by combining the determined base level risk (i.e., the base-scores of the security issues), mitigated risk level, escalation, and/or impact assessment (i.e., the contextual scores of instances of the contextual information), as the limitation above]; based on the risk score, generating a security posture visualization associated with the computing environment, wherein the security posture visualization comprises the security issue associated with the risk score (Green, FIGS.
15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. The system displays risk or other information associated with PEO 1510, [0058], the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher level risk designations are averages of lower servers, or alternatively the median risk level of lower servers, the highest server risk level of any of the lower servers, the mode of the server risk level (most frequently occurring), etc., [0059] The display 1500 comprises a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher level organizational elements, etc., are based on the associated risk level, such as color-coding (low risk: green; high risk: red) and apportioning different sizes (higher-risk elements: larger hierarchical ring; lower-risk elements: smaller hierarchical ring), [0060]) [Examiner interprets the display of risk levels for different security vulnerabilities of different servers, calculated based on vulnerability scores, as generating, based on the CSM-based risk score, a security posture visualization associated with the computing environment, wherein the security posture visualization comprises the security issue associated with the CSM-based risk score]; and communicating the security posture visualization to cause display of the security posture visualization (Green, FIGS.
15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. A user can select a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [Examiner interprets the user's selection to display risk posture or other information as communicating the security posture visualization to cause display of the security posture visualization]. Although Green teaches structured scoring (normalized inputs, multipliers/mitigators, and formulas/steps that compute risk scores based on factors like mitigation environment, elapsed time, escalation thresholds, and impacts) that is functionally similar to mapping an (issue, context) pair to a modified score, a plurality of security issues (i.e., many vulnerabilities across many servers, with multiple factors used to compute scores), a base score and multiple modifiers/factors contributing to a final risk score, and a logical representation of relationships between security issues, contextual information, and associated risk scores, Green does not explicitly teach an explicit matrix data structure: accessing a contextual security matrix (CSM) that is a structured data representation that maps a first security issue to a first instance of the contextual information having a first contextual score; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM is generated using a CSM model; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information; and a CSM-based risk score. However, Shubhabrata teaches: accessing a contextual security matrix (CSM) that is a structured data representation that maps a first security issue to a first
instance of the contextual information having a first contextual score (Shubhabrata, system and method employ an algorithm that correlates vulnerabilities with contextual information such as threat data and virtualization tags (e.g., as provided in the virtualization environment by a vendor such as VMware, etc.). The algorithm works on a three-dimensional (or three axis) model in some embodiments. The three dimensions are summarized below: Dimension#1—Vulnerability (e.g., as reported by vulnerability assessment products). Related data could include base/temporal CVSS score, common vulnerabilities and exposures identifier (CVE ID), severity, etc. Dimension#2—Threat (e.g., threats received from Threat Intelligence systems such as DeepSight). Related data could include threat impact, impacted CVE ID, type of threat, operating system impacted, applications impacted, etc. Dimension#3—Workload Context: Tags (e.g., Operational Tags as well as Security Tags, i.e., static tags and dynamic tags, as defined in a virtualization environment using VMware, etc.) (See Col 3, lines 43-67) A vulnerability score 212, a threat score 214, and a contextual score 216 are combined to form a prioritization score 218 (see col 6, lines 1-3). The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202…. For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202.
In some embodiments, for a specified asset 224 and a vulnerability specified by a CVE ID 220, the contextual module 310 determines whether the vulnerability matches the asset 224 (see Col 8, lines 19-40)) [In light of the specification, the CSM defines how a security issue is affected by each item of contextual information and operates as a data structure, see instant application at pars. [0026, 0061, 0032]; Examiner interprets the vulnerability as the security issue, the tags/workload context as the contextual information, the generated contextual score as the contextual score, a specific tag instance such as INTERNET FACING or CRITICAL DATA as the first instance of contextual information, the contextual score 216 for that vulnerability-asset-context correlation as the first contextual score, and the data structure established and populated in memory by the contextual module (e.g., tables, linked records, or multidimensional data representations correlating vulnerabilities, assets, and contextual attributes) as the contextual security matrix (CSM), with the 3-axis model as the CSM model]. wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores (Shubhabrata, When vulnerability data 210 is received by the computing device 302, the vulnerability module 306 tracks the association of CVE ID 220, CVSS score 222, severity and exploitability information (when available) for each such vulnerability or exposure, for each asset 224, so that these are correlated. For example, various associated entries could have links in a database, be on the same row or column in a table, or be listed sequentially in a file, etc. The vulnerability module 306 produces a vulnerability score 212, for each vulnerability for each asset 224, which could be the base CVSS score 222 or the temporal CVSS score 222 or a combination (See col 7, lines 18-29), static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc.
and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc. (see col 5, lines 34-58), The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202. Static tag information 204 and dynamic tag information 206 may be included in the workload context 102…For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202 (see col 8, lines 19-34)) [Examiner interprets the system's disclosure of multiple vulnerabilities (i.e., a plurality of security issues), multiple tags (i.e., a plurality of instances of the contextual information), and multiple per-asset contextual scores (i.e., a plurality of contextual scores), stored in the data structure, as the limitation above]; wherein the CSM is generated using a CSM model (Shubhabrata, system and method employ an algorithm that correlates vulnerabilities with contextual information such as threat data and virtualization tags (e.g., as provided in the virtualization environment by a vendor such as VMware, etc.). The algorithm works on a three-dimensional (or three axis) model in some embodiments. The three dimensions are summarized below: Dimension#1—Vulnerability (e.g., as reported by vulnerability assessment products). Related data could include base/temporal CVSS score, common vulnerabilities and exposures identifier (CVE ID), severity, etc. Dimension#2—Threat (e.g., threats received from Threat Intelligence systems such as DeepSight). Related data could include threat impact, impacted CVE ID, type of threat, operating system impacted, applications impacted, etc.
Dimension#3—Workload Context: Tags (e.g., Operational Tags as well as Security Tags, i.e., static tags and dynamic tags, as defined in a virtualization environment using VMware, etc.) (See Col 3, lines 43-67) The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202…. For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202. In some embodiments, for a specified asset 224 and a vulnerability specified by a CVE ID 220, the contextual module 310 determines whether the vulnerability matches the asset 224 (see Col 8, lines 19-40)) [In light of the specification, the CSM model requires a model that defines scoring relationships; examiner interprets the 3-axis correlation algorithm, which defines the relationship between base scores, vulnerabilities, and their workload context (contextual information), as the CSM model used to generate the CSM]. wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Shubhabrata, The prioritization module 312 of FIG. 3 cooperates with the vulnerability module 306, the threat module 308 and the contextual module 310, to produce the prioritization score 218 from the vulnerability score 212, the threat score 214 and the contextual score 216.
In one embodiment, the prioritization module 312 multiplies the vulnerability score 212, for a particular asset 224 and a particular vulnerability (e.g., as identified by a CVE ID 220), the threat score 214, for the asset 224 and the particular vulnerability, and the contextual score 216, for the asset 224 and the particular vulnerability. This result can then be scaled, e.g., by dividing by a predetermined number, to produce the prioritization score 218. Various scales are readily devised for each of the scores 212, 214, 216, 218, as are various scaling factors. The prioritization score 218 thus represents a relative numbering or ranking of priority of a specific vulnerability of a specific asset 224, relative to other vulnerabilities and/or other assets 224… A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation (see col 8, lines 60-67, and col 9, lines 1-12) The vulnerability score could include, or be based on, a base or temporal CVSS score, or both. This could be accompanied by a CVE ID, identifying a particular vulnerability in the asset for which the CVSS score is determined (see col 9, lines 35-39), static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc., and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc. (see col 5, lines 34-58)) [In light of the specification, a CSM-based score is a score quantifying risk and exposure; examiner interprets the system's disclosure of a vulnerability score (i.e., base score), contextual score, and prioritization score (i.e., CSM-based scores) in a single structured framework as the limitation above]. CSM-based risk score (Shubhabrata, A vulnerability score 212, a threat score 214, and a contextual score 216 are combined to form a prioritization score 218.
The prioritization score 218 can be applied to indicate the impact or severity of a vulnerability, so that vulnerabilities can be prioritized as to which ones need attention or remediation (see col 6, lines 1-6). A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation. Vulnerability data 210 and/or threat information 208 can be consulted to guide the remediation effort (see col 9, lines 10-14)) [In light of the specification, a CSM-based score quantifies security exposure (exploitability and impact), see instant application [0006], [0032]; the prioritization score indicating the impact or severity is interpreted as the CSM-based score]. Therefore, it would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date to modify the teaching of Green to include the concept of accessing a contextual security matrix (CSM) that is a structured data representation that maps a first security issue to a first instance of the contextual information having a first contextual score; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM is generated using a CSM model; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information; and a CSM-based risk score, as taught by Shubhabrata, for the purpose of combining a vulnerability score 212 and a contextual score 216 to form a prioritization score 218 that can be applied to indicate the impact or severity of a vulnerability, so that vulnerabilities can be prioritized as to which ones need attention or remediation [Shubhabrata: (col 6, lines 1-6)].
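The multiply-and-scale combination Shubhabrata describes at col 8, lines 60-67 can be sketched in a few lines. This is an illustrative aid only, not part of the record: the function name, the score ranges, and the scaling factor of 100 are assumptions for demonstration.

```python
# Illustrative sketch of Shubhabrata's prioritization scheme: the vulnerability
# score 212, threat score 214, and contextual score 216 for a given asset/CVE
# pair are multiplied, then scaled by a predetermined number. All identifiers
# and the scaling factor are hypothetical.

def prioritization_score(vulnerability: float, threat: float,
                         contextual: float, scale: float = 100.0) -> float:
    """Combine the three per-asset, per-vulnerability scores into one value."""
    return (vulnerability * threat * contextual) / scale

# A relatively high result suggests the vulnerability should be remediated first.
score = prioritization_score(vulnerability=9.8, threat=7.0, contextual=8.5)
```

One notable consequence of the product form, under this reading, is that a zero contextual score (e.g., a vulnerability that does not match the asset's workload context at all) zeroes out the priority entirely, whereas an additive combination would not.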
Regarding claim 2, Green and Shubhabrata teach the system of claim 1, wherein a contextual security matrix model generator, associated with a security risk management engine, supports generating the CSM model associated with security issue data, the contextual information, and recommended remediation action data of the CSM (Green, FIG. 2, data sources 205 comprise one or more servers connected to an electronic network of an organization. The data sources 205 provide data derived from the analysis of security information 205 at the server level for each server, for example, by IP address. Each data source 205 is processed by the Active Engine 207 (i.e., a security risk management engine or a CSM model) to normalize the data, which is then analyzed to establish security risk status. The risk level of each server in the cluster/hierarchy of servers 210 is evaluated independently. Once a base level of risk is determined for servers 215, this base level is modified based on a variety of factors, [0044]; FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level (i.e., CSM-based risk scores), escalation, and/or impact assessment (i.e., contextual scores of instances of contextual information).
For example, one or more of these values are averaged to determine the system level risk, [0062]) [The examiner interprets the active engine analyzing data sources that provide contextual data (i.e., server-specific details) to establish risk status, with the system further adjusting and refining the risk level by aggregating scores and incorporating contextual and mitigation data, as the limitation above]. Green does not explicitly teach: a contextual security matrix model generator generating the CSM model. However, Shubhabrata teaches: a contextual security matrix model generator generating the CSM model (Shubhabrata, the system and method employ an algorithm that correlates vulnerabilities with contextual information such as threat data and virtualization tags (e.g., as provided in the virtualization environment by a vendor such as VMware, etc.). The algorithm works on a three-dimensional (or three-axis) model in some embodiments. The three dimensions are summarized below: Dimension #1—Vulnerability (e.g., as reported by vulnerability assessment products); related data could include a base/temporal CVSS score, a common vulnerabilities and exposures identifier (CVE ID), severity, etc. Dimension #2—Threat (e.g., threats received from Threat Intelligence systems such as DeepSight); related data could include threat impact, impacted CVE ID, type of threat, operating system impacted, applications impacted, etc. Dimension #3—Workload Context: Tags (e.g., Operational Tags as well as Security Tags, i.e., static tags and dynamic tags, as defined in a virtualization environment using VMware, etc.) (see Col 3, lines 43-67). The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202…. For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216.
To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, with aspects of the asset 224, e.g., from metadata in tags 202. In some embodiments, for a specified asset 224 and a vulnerability specified by a CVE ID 220, the contextual module 310 determines whether the vulnerability matches the asset 224 (see Col 8, lines 19-40)) [In light of the specification, a CSM model is a model that defines scoring relationships; the examiner interprets the three-axis correlation algorithm, which defines the relationship between base scores, vulnerabilities, and their workload context (contextual information), as a contextual security matrix model generator generating the CSM model]. The same motivation applies as for claim 1. Regarding claim 3, Green and Shubhabrata teach the system of claim 1, wherein the CSM is a scored representation of how each of the plurality of security issues is affected by corresponding contextual information (Green, FIG. 2, each data source 205 is processed by the Active Engine 207 (i.e., a CSM model) to normalize the data, which is then analyzed to establish security risk status (i.e., a plurality of security issues). The risk level of each server in the cluster/hierarchy of servers 210 is evaluated independently. Once a base level of risk is determined for servers 215, this base level is modified based on a variety of factors, [0044]; FIG. 6, table 1205 (i.e., the CSM matrix) contains data-feed vulnerabilities (i.e., the plurality of security issues) versus frequency in days and risk thresholds (i.e., the contextual information and scores), [0048]; FIG. 20, table 2005 (i.e., the CSM matrix) tabulates normalized scores from multiple data sources into a composite risk, [0061]; FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores.
A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level, escalation, and/or impact assessment (i.e., contextual scores of instances of contextual information). For example, one or more of these values are averaged to determine the system level risk, [0062]) [The examiner interprets the active engine (i.e., a CSM model) analyzing data sources that provide security issues and contextual data (i.e., server-specific details) to establish risk status, with the system further adjusting and refining the risk level by aggregating scores based on contextual data from the data sources (i.e., multiple servers) and configuring those scores in tabular form (i.e., the CSM matrix), as: wherein the CSM is a scored representation of how each of the plurality of security issues is affected by corresponding contextual information]. Regarding claim 4, Green and Shubhabrata teach the system of claim 1, wherein the CSM further comprises one or more recommended remediation actions that are mapped to the security issue, wherein a recommended remediation action is an actionable item that is performed to mitigate the security issue in the computing environment (Green, FIG. 3, mitigation steps (i.e., remediation actions) are taken based on risks and impacts that have been identified.
Mitigation is applied at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environments reflect varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk-mitigating components such as white lists or a demilitarized zone/perimeter network. A server can obtain a medium mitigated risk by being associated with a medium-risk environment 910, which can add additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc. These support and protection structures are layered to protect the server and/or system and mitigate the base risk. For example, all protections of the high-risk environment 905 may automatically be present in the medium- and low-risk environments 910 and 915. Mitigation data is inputted by authorized users, [0045]; the remediation priority algorithm is configurable by a user or organization. For example, a system with a medium risk level but high security impact can be automatically assigned a higher remediation priority than a system with a high security risk level but a low security impact. Thus, remediation priorities 705 are set to automatically, and categorically, prioritize higher-security-impact systems, or set to prioritize higher-risk-level systems, depending upon user and/or organizational requirements, [0054]) [The examiner interprets the system or user assigning different remediation priorities to mitigate different vulnerabilities based on their risk levels (i.e., security issues) as: the CSM further comprises one or more recommended remediation actions that are mapped to the security issue, wherein a recommended remediation action is an actionable item that is performed to mitigate the security issue in the computing environment].
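Green's configurable remediation-priority rule at [0054] — ranking a medium-risk but high-impact system above a high-risk but low-impact one, or vice versa — reduces to a configurable sort. The sketch below is an illustrative aid, not from the record; the System tuple and the 1-3 numeric levels are assumptions.

```python
# Hypothetical sketch of Green's configurable remediation-priority algorithm
# ([0054]): an organization can prioritize by security impact first or by
# risk level first. The data model and numeric scales are assumptions.
from typing import List, NamedTuple

class System(NamedTuple):
    name: str
    risk: int    # 1 = low, 2 = medium, 3 = high
    impact: int  # security impact on the same 1-3 scale

def remediation_order(systems: List[System],
                      impact_first: bool = True) -> List[System]:
    """Return the systems sorted with the highest remediation priority first."""
    key = (lambda s: (s.impact, s.risk)) if impact_first else (lambda s: (s.risk, s.impact))
    return sorted(systems, key=key, reverse=True)

systems = [System("A", risk=3, impact=1), System("B", risk=2, impact=3)]
# With impact_first=True, the medium-risk/high-impact system B outranks
# the high-risk/low-impact system A, mirroring Green's example.
```

Flipping `impact_first` to False reverses the policy, which is the configurability the reference attributes to the user or organization.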
Regarding claim 5, Green and Shubhabrata teach the system of claim 1, wherein the base-score of the security issue is a predefined score of the security issue and a contextual score of an instance of the contextual information is a quantified additional security exposure of the security issue based on the instance of the contextual information, wherein the quantified additional security exposure is associated with a potential impact or a potential exploitability (Green, FIG. 2, data sources 205 comprise one or more servers connected to an electronic network of an organization, which provide data derived from the analysis of security information 205 at the server level for each server, for example, by IP address. Each data source 205 is processed by the Active Engine 207 (i.e., a security risk management engine) to normalize the data. Each data source 205, for example, Assured Compliance Assessment Solution (ACAS) scan results, is then analyzed to establish security risk status (i.e., a plurality of security issues). The risk level of each server in the cluster/hierarchy of servers 210 is evaluated independently. Once a base level of risk is determined for servers 215, this base level can be modified based on a variety of factors, [0044]; mitigation steps (i.e., remediation actions) are taken based on risks and impacts that have been identified. Mitigation is applied at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environments reflect varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk-mitigating components such as white lists or a demilitarized zone/perimeter network.
A server can obtain a medium mitigated risk by being associated with a medium-risk environment 910, which can add additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc. These support and protection structures are layered to protect the server and/or system and mitigate the base risk. For example, all protections of the high-risk environment 905 may automatically be present in the medium- and low-risk environments 910 and 915. Mitigation data is inputted by authorized users. By adding additional predetermined mitigating measures, a server's security risk is improved substantially by environmental protections, [0045]; FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level (i.e., CSM-based risk scores), escalation, and/or impact assessment (i.e., contextual scores of instances of contextual information).
For example, one or more of these values are averaged to determine the system level risk, [0062]) [The examiner interprets receiving security data corresponding to security vulnerabilities from scanning tools (e.g., ACAS), analyzing the normalized data to determine the base score of each server (i.e., the predefined vulnerability severity), multiplying in the mitigated risk level, escalation, and/or impact assessment (i.e., a contextual score of an instance of contextual information), and adjusting the risk levels by applying better mitigation steps to decrease the initial vulnerability rating for better protection, as the limitation above]. Regarding claim 6, Green and Shubhabrata teach the system of claim 1, wherein the security issue is associated with the first instance of the contextual information and a second instance of the contextual information, the first instance of the contextual information is associated with the first contextual score and the second instance of the contextual information is associated with a second contextual score (Green, FIG. 4, an individual server 1005 that is high risk receives active mitigation in a low-risk environment 915 and can be downgraded to medium risk 1010. Risk levels of servers can also be upgraded. For example, a medium-risk server 1025 that is moved to a high-risk environment 905 can be upgraded to a high-risk server 1030 (i.e., a first instance of contextual information), [0046]; FIG. 22, at step 2905, on an electronic network, security data is received corresponding to at least one security vulnerability (i.e., the security issues) associated with each of a plurality of servers, each of the plurality of servers being associated with a secured system. At step 2910, a server security vulnerability score may be determined, for each of the plurality of servers, based on the security data corresponding to the at least one security vulnerability for each of the plurality of servers.
At step 2915, the server security vulnerability score can be modified, for each of the plurality of servers, based on a time elapsed since a discovery of the at least one security vulnerability (i.e., a second instance of contextual information), [0063]) [The examiner interprets that each vulnerability of the different servers depends on multiple contextual factors (i.e., time, environment, mitigation steps) simultaneously, each contributing an independent contextual score that modifies the base security issue score, as the limitation above]. Regarding claim 7, Green and Shubhabrata teach the system of claim 1, wherein the CSM-based risk score is generated based on a sum of the base-score of the security issue and a contextual score of one or more instances of the contextual information of the security issue (Green, FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the CSM-based risk score) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level, escalation, and/or impact assessment (i.e., a contextual score of one or more instances of contextual information of the security issue).
For example, one or more of these values are averaged to determine the system level risk, [0062]; the techniques presented apply multipliers and mitigators to increment and decrement a meta risk score for servers, systems, IoT devices, environments, other devices, etc., associated with an organization. Multipliers may be increased based upon the discovery dates of vulnerabilities, the lifespan of vulnerabilities, and/or remediation types and dates, [0066]) [The examiner interprets calculating the overall risk of the system based on the base level risk (i.e., base-scores of security issues) and the mitigated risk level, escalation, and/or impact assessment collectively (i.e., a contextual score of one or more instances of contextual information of the security issue), using multipliers and mitigators to increase or decrease the risk levels, as the limitation above]. Regarding claim 8, Green and Shubhabrata teach the system of claim 1, wherein a security posture management engine supports generating the security posture visualization comprising the plurality of security issues, wherein the plurality of security issues are associated with corresponding CSM-based risk scores and the contextual information, wherein the security posture visualization comprises each of the plurality of security issues as alerts, wherein an alert comprises a prioritization identifier and a recommended remediation action, wherein the recommended remediation action is executable to address a security threat associated with the alert (Green, the remediation priority algorithm is configurable by a user or organization. For example, a system with a medium risk level but high security impact can be automatically assigned a higher remediation priority than a system with a high security risk level but a low security impact.
Thus, remediation priorities 705 are set to automatically, and categorically, prioritize higher-security-impact systems, or set to prioritize higher-risk-level systems, depending upon user and/or organizational requirements, [0054]; FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. For example, a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]; the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk levels (most frequently occurring), etc., [0059]; the display 1500 comprises a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon the determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level.
For example, low risk is color-coded green and high risk red, and elements are apportioned different sizes, such that higher-risk elements receive a larger hierarchical ring and lower-risk elements a smaller hierarchical ring, [0060]) [The examiner interprets displaying the risk posture or other information, such as the relevant risk levels for the different security vulnerabilities of different servers calculated based on vulnerability scores, color-coding based on the associated risk levels (i.e., alerts), and setting remediation priority based on risk levels to automatically, and categorically, prioritize higher-security-impact systems, as the limitation above]. Regarding claim 9, Green and Shubhabrata teach the system of claim 1, the operations further comprising: communicating, from a security management client, a request for a security posture of the computing environment; based on the request, receiving the security posture visualization associated with the computing environment, wherein the security posture visualization comprises an alert associated with the computing device, the security issue, and an instance of the contextual information associated with the security issue; and causing display of the security posture visualization (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. For example, a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]; the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1.
Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk levels (most frequently occurring), etc., [0059]; the display 1500 comprises a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon the determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level, such as color-coding low risk green and high risk red, or apportioning different sizes such that higher-risk elements receive a larger hierarchical ring and lower-risk elements a smaller hierarchical ring, [0060]) [The examiner interprets the user selecting to display the risk posture or other information, such as the relevant risk levels for the different security vulnerabilities of different servers calculated based on vulnerability scores, as: communicating, from a security management client, a request for a security posture of the computing environment; based on the request, receiving the security posture visualization associated with the computing environment, wherein the security posture visualization comprises an alert associated with the computing device, the security issue, and an instance of the contextual information associated with the security issue; and causing display of the security posture visualization].
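The hierarchical roll-up Green describes at [0059] — a higher organizational level's designation computed as the average, median, maximum, or mode of the server risk levels beneath it — can be sketched briefly. This is an illustrative aid, not from the record; all identifiers and the example risk values are assumed.

```python
# Illustrative sketch (identifiers assumed, not from the record) of the
# hierarchical roll-up Green describes at [0059]: a higher organizational
# level's risk designation may be the average, median, maximum, or mode of
# the risk levels of the servers beneath it.
from statistics import mean, median, mode

AGGREGATORS = {"average": mean, "median": median, "max": max, "mode": mode}

def level_risk(server_risks, method="average"):
    """Aggregate per-server risk levels into one designation for the level."""
    return AGGREGATORS[method](server_risks)

risks = [1, 1, 2, 3]                  # e.g., risk levels of servers under PEO 1
avg_designation = level_risk(risks)   # average of the lower servers
worst_case = level_risk(risks, "max") # highest server risk level wins
```

The choice of aggregator changes what the dashboard reports: "max" surfaces the single worst server at every level, while "average" can mask an outlier, which is presumably why the reference lists several alternatives.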
Regarding claim 10, Green and Shubhabrata teach the system of claim 1, the operations further comprising: receiving an indication to execute a recommended remediation action associated with the security issue, wherein the recommended remediation action is associated with the security posture visualization; and communicating the indication to execute the recommended remediation action to cause execution of the recommended remediation action (Green, FIG. 3, mitigation steps (i.e., remediation actions) are taken based on risks and impacts that have been identified. Mitigation is applied at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environments reflect varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk-mitigating components such as white lists or a demilitarized zone/perimeter network. A server can obtain a medium mitigated risk by being associated with a medium-risk environment 910, which can add additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc. These support and protection structures are layered to protect the server and/or system and mitigate the base risk. For example, all protections of the high-risk environment 905 may automatically be present in the medium- and low-risk environments 910 and 915. Mitigation data is inputted by authorized users, [0045]; the remediation priority algorithm is configurable by a user or organization. For example, a system with a medium risk level but high security impact can be automatically assigned a higher remediation priority than a system with a high security risk level but a low security impact.
Thus, remediation priorities 705 are set to automatically, and categorically, prioritize higher-security-impact systems, or set to prioritize higher-risk-level systems, depending upon user and/or organizational requirements, [0054]; a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [The examiner interprets the user clicking a button to execute different remediation priorities via the security posture visualization and to mitigate different vulnerabilities by applying different changes to the displayed visualization posture as the limitation above]. Regarding claim 11, Green and Shubhabrata teach one or more computer-storage media having computer-executable instructions embodied thereon that, when executed by a computing system having a processor and memory, cause the processor to perform operations (Green, a non-transitory computer readable medium is disclosed comprising instructions for determining a secured system security risk score. The instructions may cause the system to execute a method, [0006]; one or more processors, for executing program instructions, [0071]), the operations comprising: communicating a request for a security posture of a computing environment (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500.
A user can select a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [The examiner interprets the user selecting to display the risk posture or other information as communicating a request for a security posture of a computing environment]; based on the request, receiving a security posture visualization associated with the computing environment, wherein the security posture visualization comprises a plurality of security issues having corresponding contextual security matrix (CSM)-based risk scores, wherein a CSM-based risk score quantifies a security exposure associated with a security issue (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. For example, a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]; the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk levels (most frequently occurring), etc., [0059]; the display 1500 comprises a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon the determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level.
For example, low risk is color-coded green and high risk red, and elements are apportioned different sizes, such that higher-risk elements receive a larger hierarchical ring and lower-risk elements a smaller hierarchical ring, [0060]; FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies, as discussed elsewhere herein. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level (i.e., CSM-based risk scores), escalation, and/or impact assessment (i.e., contextual scores of instances of contextual information). For example, one or more of these values are averaged to determine the system level risk, [0062]) [The examiner interprets the user selecting to display the risk posture or other information, such as the relevant risk levels for the different security vulnerabilities of different servers calculated based on vulnerability scores (i.e., CSM-based scores), as the limitation above]. wherein the CSM comprises the plurality of security issues, a plurality of instances of contextual information, and a plurality of contextual scores (Green, the system may receive a plurality of security ratings from different data sources, which may be normalized to a single security rating standard.
The security ratings may be received from a plurality of data sources, and may evaluate the security risk of data sources, servers, IoT devices, systems, networks, environments, other devices, etc., [0035] FIG. 20, risk data may be received from a plurality of data sources. For example, risk data may be received from ACAS, STIG, SCAP, Fortify, POA&M, etc. As shown in table 2005, risk data from one or more risk assessment data sources may be normalized to determine a composite risk score and/or risk rating, [0061]) [Plurality of security issues (i.e., many vulnerabilities across many servers and multiple factors are used to compute scores)] wherein the CSM is generated using a CSM model that is a computational framework that algorithmically assigns base-scores to security issues (Green, FIG. 21 is a listing of formulas that may be used to determine risk levels and/or scores discussed herein. As discussed above, a base score may be determined at step 2405 that may be an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score may also be determined at step 2410.
For example, system security metrics may be averaged to determine an overall mitigated risk level and/or mitigated risk level score, [0062] a server security vulnerability score may be determined, for each of the plurality of servers, based on the security data corresponding to the at least one security vulnerability for each of the plurality of servers, [0063]) [Examiner interprets the system assigning a base score/risk score based on vulnerability/security data and normalized rating (i.e., an algorithmic scoring framework)] wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Green, may detect and categorize security vulnerabilities, and modify the assessed risk level of data sources, servers, Internet of Things (IoT) devices, systems, etc., based on the categorization. The assessed risk level may further be modified over time based upon predetermined rules, such as the time since discovery, and mitigation steps taken, [0034] an overall system risk may be determined that may be based on the determined base level risk, mitigated risk level, escalation, and/or impact assessment. For example, one or more of these values may be averaged to determine the system level risk, [0062]) [Green supports a base score, and multiple modifiers/factors contributing to a final risk score]; and causing display of the security posture visualization (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. For example, a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510. [0058], the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk level (most frequently occurring), etc., [0059] The display 1500 includes a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level.
For example, low risk may be color-coded green and high risk red, and elements may be apportioned different sizes (higher-risk elements a larger hierarchical ring, lower-risk elements a smaller ring), [0060]) [Examiner interprets the user selecting to display risk posture or other information, such as relevant risk levels related to different security vulnerabilities of different servers calculated based on vulnerability scores, as causing display of the security posture visualization]. Although Green teaches structured scoring (normalized inputs, multipliers/mitigators, and formulas/steps to compute risk scores based on factors such as mitigation environment, elapsed time, escalation thresholds, and impact) that is functionally similar to mapping an issue and its context to a modified score, including a plurality of security issues (i.e., many vulnerabilities across many servers with multiple factors used to compute scores), a base score, multiple modifiers/factors contributing to a final risk score, and a logical representation of relationships between security issues, contextual information, and associated risk scores, Green does not explicitly teach an explicit matrix data structure: wherein the CSM-based risk score is generated using a CSM, wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information. However, Shubhabrata teaches: wherein the CSM-based risk score is generated using a CSM (Shubhabrata, A vulnerability score 212, a threat score 214, and a contextual score 216 are combined to form a prioritization score 218.
The prioritization score 218 can be applied to indicate the impact or severity of a vulnerability, so that vulnerabilities can be prioritized as to which ones need attention or remediation, (see col 6, lines 1-6), A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation. Vulnerability data 210 and/or threat information 208 can be consulted to guide the remediation effort, (see col 9, lines 10-14)) [In light of the specification, the CSM-based score quantifies security exposure (exploitability and impact), see instant application [0006], [0032]; the prioritization score indicating the impact or severity is interpreted as the CSM-based score]. wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores (Shubhabrata, When vulnerability data 210 is received by the computing device 302, the vulnerability module 306 tracks the association of CVE ID 220, CVSS score 222, severity and exploitability information (when available) for each such vulnerability or exposure, for each asset 224, so that these are correlated. For example, various associated entries could have links in a database, be on the same row or column in a table, or be listed sequentially in a file, etc. The vulnerability module 306 produces a vulnerability score 212, for each vulnerability for each asset 224, which could be the base CVSS score 222 or the temporal CVSS score 222 or a combination, (see col 7, lines 18-29) static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc., and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc., (see col 5, lines 34-58), The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202.
Static tag information 204 and dynamic tag information 206 may be included in the workload context 102…For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202, (see col 8, lines 19-34)) [Examiner interprets the system disclosing multiple vulnerabilities (i.e., a plurality of security issues), multiple tags (i.e., a plurality of instances of the contextual information), and multiple contextual scores based on asset (i.e., a plurality of contextual scores), and storing them in the data structure, as the limitation above]; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Shubhabrata, The prioritization module 312 of FIG. 3 cooperates with the vulnerability module 306, the threat module 308 and the contextual module 310, to produce the prioritization score 218 from the vulnerability score 212, the threat score 214 and the contextual score 216. In one embodiment, the prioritization module 312 multiplies the vulnerability score 212, for a particular asset 224 and a particular vulnerability (e.g., as identified by a CVE ID 220), the threat score 214, for the asset 224 and the particular vulnerability, and the contextual score 216, for the asset 224 and the particular vulnerability. This result can then be scaled, e.g., by dividing by a predetermined number, to produce the prioritization score 218. Various scales are readily devised for each of the scores 212, 214, 216, 218, as are various scaling factors.
The prioritization score 218 thus represents a relative numbering or ranking of priority of a specific vulnerability of a specific asset 224, relative to other vulnerabilities and/or other assets 224… A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation, (see col 8, lines 60-67, and col 9, lines 1-12) The vulnerability score could include, or be based on, a base or temporal CVSS score, or both. This could be accompanied by a CVE ID, identifying a particular vulnerability in the asset for which the CVSS score is determined, (see col 9, lines 35-39), static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc., and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc., (see col 5, lines 34-58)) [In light of the specification, the CSM-based score is a score quantifying risk and exposure; the system disclosing a vulnerability score (i.e., base score), a contextual score, and a prioritization score (i.e., CSM-based scores) in a single structured framework is interpreted as the limitation above]. Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teachings of Green to include the concept wherein the CSM-based risk score is generated using a CSM, wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information, as taught by Shubhabrata, for the purpose of combining a vulnerability score 212, a threat score 214, and a contextual score 216 to form a prioritization score 218, which can be applied to indicate the impact or severity of a vulnerability so that vulnerabilities can be prioritized as to which ones need attention or remediation, [Shubhabrata: (col 6, lines 1-6)].
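The scoring arithmetic the cited passages describe (Green's base/mitigated/overall averaging at steps 2405-2425, and Shubhabrata's multiplication and scaling of the vulnerability, threat, and contextual scores) can be sketched as follows. This is an illustrative reading only; the function names, example inputs, and the scale divisor are hypothetical and are not taken from either reference.

```python
# Hypothetical sketch of the scoring arithmetic in the cited passages.
# Names and the scale constant are illustrative, not from the references.

def green_overall_risk(normalized_scores, security_metrics, escalation, impact):
    """Green [0062]: base score as a summation of normalized scores (step
    2405), mitigated risk as an average of security metrics (step 2410),
    and overall system risk as an average of the component values."""
    base = sum(normalized_scores)                       # step 2405
    mitigated = sum(security_metrics) / len(security_metrics)  # step 2410
    components = [base, mitigated, escalation, impact]  # steps 2415-2425
    return sum(components) / len(components)            # overall risk

def shubhabrata_prioritization(vuln_score, threat_score, contextual_score,
                               scale=10.0):
    """Shubhabrata (col 8-9): multiply the three per-asset scores and
    scale by a predetermined number (divisor value is hypothetical)."""
    return (vuln_score * threat_score * contextual_score) / scale
```

For example, `green_overall_risk([1, 2, 3], [4, 6], 5, 3)` averages a base of 6, a mitigated level of 5, an escalation of 5, and an impact of 3.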
Regarding claim 12, Green and Shubhabrata teach the media of claim 11, wherein the CSM is a scored representation of how each of the plurality of security issues are affected by corresponding contextual information (Green, FIG. 2, Each data source 205 is processed by the Active Engine 207 (i.e., a CSM model) to normalize the data, which is then analyzed to establish security risk status (i.e., a plurality of security issues). The risk level of each server in the cluster/hierarchy of servers 210 is evaluated independently. Once a base level of risk is determined for servers 215, this base level is modified based on a variety of factors, [0044] FIG. 6, table 1205 (i.e., the CSM matrix) containing data feed vulnerabilities (i.e., the plurality of security issues) vs. frequency in days and risk thresholds (i.e., the contextual information and scores), [0048] In FIG. 20, table 2005 (i.e., the CSM matrix) tabulates normalized scores from multiple data sources into a composite risk, [0061] In FIG. 21, a base score is determined at step 2405 using an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score is determined at step 2410 by averaging system security metrics to determine an overall mitigated risk level and/or mitigated risk level score. Confidentiality, integrity, and/or availability scores/multipliers are used to determine the mitigated risk level. At step 2415, the risk level can be escalated by one or more authorized users by setting one or more threshold frequencies. At step 2420, an impact assessment is determined, which determines remediation priority. At step 2425, an overall system risk (i.e., the security exposure) is determined based on the determined base level risk (i.e., base-scores of security issues), mitigated risk level, escalation, and/or impact assessment (i.e., contextual scores of instances of contextual information).
For example, one or more of these values are averaged to determine the system level risk, [0062]) [Examiner interprets the Active Engine (i.e., a CSM model) analyzing data sources that provide security issues and contextual data (i.e., server-specific details) to establish risk status, and the system further adjusting and refining the risk level by aggregating scores based on contextual data from data sources (i.e., multiple servers) and configuring those scores in tabular form (i.e., the CSM matrix), as wherein the CSM is a scored representation of how each of the plurality of security issues is affected by corresponding contextual information]. Regarding claim 13, Green and Shubhabrata teach the media of claim 11, wherein the CSM further comprises one or more recommended remediation actions that are mapped to the security issue, wherein a recommended remediation action is an actionable item that is performed to mitigate the security issue in the computing environment (Green, FIG. 3, mitigation steps (i.e., a remediation action) are taken based on risks and impacts that have been identified. Mitigation is applied at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environment reflects varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk-mitigating components such as white lists or a demilitarized zone/perimeter network. A server obtains a medium mitigated risk by being associated with a medium-risk environment 910, which can add additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc. These support and protection structures are layered to protect the server and/or system and mitigate the base risk.
For example, all protections of the high-risk environment 905 are automatically present in the medium and low-risk environments 910 and 915. Mitigation data is input by an authorized user(s), [0045] the remediation priority algorithm is configurable by a user or organization. For example, a system with a medium-risk level but high security impact can be automatically assigned a higher remediation priority than a system with a high security risk level but a low security impact. Thus, remediation priorities 705 are set to automatically, and categorically, prioritize higher security impact systems, or set to prioritize higher risk level systems, depending upon user and/or organizational requirements, [0054] a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [Examiner interprets the system or user assigning different remediation priorities to mitigate different vulnerabilities based on their risk levels (i.e., security issues) as the CSM further comprises one or more recommended remediation actions that are mapped to the security issue, wherein a recommended remediation action is an actionable item that is performed to mitigate the security issue in the computing environment]. Regarding claim 14, Green and Shubhabrata teach the media of claim 11, wherein the security posture visualization comprises the plurality of security issues, wherein the plurality of security issues are associated with corresponding CSM-based risk scores and the contextual information (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. For example, a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510.
[0058], the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk level (most frequently occurring), etc., [0059] The display 1500 includes a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon determined risk levels. For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level. For example, low risk may be color-coded green and high risk red, and elements may be apportioned different sizes (higher-risk elements a larger hierarchical ring, lower-risk elements a smaller ring), [0060]) [Examiner interprets displaying risk posture or other information, such as relevant risk levels related to different security vulnerabilities of different servers calculated based on vulnerability scores and color-coded based on associated risk levels (i.e., alerts), as the security posture visualization comprises a plurality of security issues, wherein the plurality of security issues are associated with corresponding CSM-based risk scores and contextual information]. Green does not explicitly teach: a CSM-based risk score. However, Shubhabrata teaches: a CSM-based risk score (Shubhabrata, A vulnerability score 212, a threat score 214, and a contextual score 216 are combined to form a prioritization score 218.
The prioritization score 218 can be applied to indicate the impact or severity of a vulnerability, so that vulnerabilities can be prioritized as to which ones need attention or remediation, (see col 6, lines 1-6), A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation. Vulnerability data 210 and/or threat information 208 can be consulted to guide the remediation effort, (see col 9, lines 10-14)) [In light of the specification, the CSM-based score quantifies security exposure (exploitability and impact), see instant application [0006], [0032]; the prioritization score indicating the impact or severity is interpreted as the CSM-based score]. The same motivation applies as in claim 11. Regarding claim 15, Green and Shubhabrata teach the media of claim 11, the operations further comprising: receiving an indication to execute a recommended remediation action associated with the security issue, wherein the recommended remediation action is associated with the security posture visualization; and communicating the indication to execute the remediation action to cause execution of the recommended remediation action (Green, FIG. 3, mitigation steps (i.e., a remediation action) are taken based on risks and impacts that have been identified. Mitigation is applied at the server level since it is a result of the environment where the server resides and the protection system installed on the server. The mitigation environment reflects varying levels of network, server, and data protection. For example, a high-risk environment 905 is protected by a few risk-mitigating components such as white lists or a demilitarized zone/perimeter network. A server obtains a medium mitigated risk by being associated with a medium-risk environment 910, which can add additional mitigating components such as putting the server behind a first firewall, exposing it to antivirus protection, etc.
These support and protection structures are layered to protect the server and/or system and mitigate the base risk. For example, all protections of the high-risk environment 905 are automatically present in the medium and low-risk environments 910 and 915. Mitigation data is input by an authorized user(s), [0045] the remediation priority algorithm is configurable by a user or organization. For example, a system with a medium-risk level but high security impact can be automatically assigned a higher remediation priority than a system with a high security risk level but a low security impact. Thus, remediation priorities 705 are set to automatically, and categorically, prioritize higher security impact systems, or set to prioritize higher risk level systems, depending upon user and/or organizational requirements, [0054] a user selects a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [Examiner interprets the user clicking a button to execute different remediation priorities and mitigate different vulnerabilities by applying different changes on the displayed visualization posture as receiving an indication to execute a remediation action associated with the security issue, wherein the recommended remediation action is associated with the security posture visualization; and communicating the indication to execute the remediation action to cause execution of the recommended remediation action].
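The claimed CSM artifact discussed throughout this analysis (a structured mapping of security issues and contextual-information instances to contextual scores, combined with base-scores into CSM-based risk scores) could be represented minimally as follows. Every identifier, CVE number, tag, score value, and the multiplicative combination below is a hypothetical illustration, not content of the application or of either cited reference.

```python
# Hypothetical CSM-style structure: base-scores per issue, plus a mapping
# of (issue, context instance) pairs to contextual scores. All values and
# the multiplicative combination are illustrative assumptions.

csm = {
    "issues": {"CVE-2023-0001": 7.5, "CVE-2023-0002": 4.0},  # base-scores
    "context": {                                             # contextual scores
        ("CVE-2023-0001", "INTERNET_FACING"): 1.5,
        ("CVE-2023-0001", "CRITICAL_DATA"): 1.2,
        ("CVE-2023-0002", "INTERNET_FACING"): 1.1,
    },
}

def csm_risk_score(csm, issue):
    """Combine an issue's base-score with every contextual score mapped
    to it, yielding one CSM-based risk score per security issue."""
    score = csm["issues"][issue]
    for (iss, _tag), contextual in csm["context"].items():
        if iss == issue:
            score *= contextual
    return round(score, 2)
```

Under these assumed values, the internet-facing issue with critical data outranks the other, which is the prioritization behavior the examiner maps onto Shubhabrata's prioritization score.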
Regarding claim 16, Green and Shubhabrata teach a computer-implemented method, the method comprising: accessing a security issue associated with a computing device in a computing environment (Green, receiving, on an electronic network, security data corresponding to a security vulnerability of each of a plurality of servers, each of the plurality of servers being associated with a secured system, [0004] FIG. 22, At step 2905, on an electronic network, security data are received corresponding to at least one security vulnerability associated with each of a plurality of servers, each of the plurality of servers being associated with a secured system, [0063]); generating a security posture visualization associated with the computing environment, wherein the security posture visualization comprises the security issue associated with a contextual security matrix (CSM)-based risk score that is generated using a CSM (Green, FIGS. 15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. The system displays risk or other information associated with PEO 1510, [0058], the low-risk designation for PEO 1 at 1510 is based upon the average score of the servers under PEO 1. Higher-level risk designations are averages of the lower servers, or alternatively the median risk level of the lower servers, the highest server risk level of any of the lower servers, the mode of the server risk level (most frequently occurring), etc., [0059] The display 1500 includes a series of concentric rings, where each concentric ring level represents a hierarchical level of the organization, with the highest selected level being displayed in the center or innermost ring. It is automatically updated based upon determined risk levels.
For example, the color, pattern, size, and/or visual indicator assigned to the displayed servers, systems, higher-level organizational elements, etc., are based on the associated risk level. For example, low risk may be color-coded green and high risk red, and elements may be apportioned different sizes (higher-risk elements a larger hierarchical ring, lower-risk elements a smaller ring), [0060]) [Examiner interprets displaying relevant risk levels related to different security vulnerabilities of different servers calculated based on vulnerability scores as generating a security posture visualization associated with the computing environment, wherein the security posture visualization comprises the security issue associated with a contextual security matrix (CSM)-based risk score that is generated using a CSM]; wherein the CSM comprises a plurality of security issues, a plurality of instances of contextual information, and a plurality of contextual scores (Green, may receive a plurality of security ratings from different data sources, which may be normalized to a single security rating standard. The security ratings may be received from a plurality of data sources, and may evaluate the security risk of data sources, servers, IoT devices, systems, networks, environments, other devices, etc., [0035] FIG. 20, risk data may be received from a plurality of data sources. For example, risk data may be received from ACAS, STIG, SCAP, Fortify, POA&M, etc. As shown in table 2005, risk data from one or more risk assessment data sources may be normalized to determine a composite risk score and/or risk rating, [0061]) [Plurality of security issues (i.e., many vulnerabilities across many servers and multiple factors are used to compute scores)] wherein the CSM is generated using a CSM model that is a computational framework that algorithmically assigns base-scores to security issues (Green, FIG. 21 is a listing of formulas that may be used to determine risk levels and/or scores discussed herein. As discussed above, a base score may be determined at step 2405 that may be an aggregated normalized score, such as a summation of a plurality of normalized scores. A mitigated risk level score may also be determined at step 2410. For example, system security metrics may be averaged to determine an overall mitigated risk level and/or mitigated risk level score, [0062] a server security vulnerability score may be determined, for each of the plurality of servers, based on the security data corresponding to the at least one security vulnerability for each of the plurality of servers, [0063]) [Examiner interprets the system assigning a base score/risk score based on vulnerability/security data and normalized rating (i.e., an algorithmic scoring framework)]; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Green, may detect and categorize security vulnerabilities, and modify the assessed risk level of data sources, servers, Internet of Things (IoT) devices, systems, etc., based on the categorization. The assessed risk level may further be modified over time based upon predetermined rules, such as the time since discovery, and mitigation steps taken, [0034] an overall system risk may be determined that may be based on the determined base level risk, mitigated risk level, escalation, and/or impact assessment. For example, one or more of these values may be averaged to determine the system level risk, [0062]) [Green supports a base score, and multiple modifiers/factors contributing to a final risk score]; and communicating the security posture visualization to cause display of the security posture visualization (Green, FIGS.
15-19 are example diagrams of aggregated server and/or system data at various organizational levels that are displayed to one or more users, for example on a heads-up dashboard (HUD) display 1500. A user can select a particular PEO level 1510, which causes the system to display risk or other information associated with PEO 1510, [0058]) [Examiner interprets the user selecting to display risk posture or other information as communicating the security posture visualization to cause display of the security posture visualization]. Although Green teaches structured scoring (normalized inputs, multipliers/mitigators, and formulas/steps to compute risk scores based on factors such as mitigation environment, elapsed time, escalation thresholds, and impact) that is functionally similar to mapping an issue and its context to a modified score, including a plurality of security issues (i.e., many vulnerabilities across many servers with multiple factors used to compute scores), a base score, multiple modifiers/factors contributing to a final risk score, and a logical representation of relationships between security issues, contextual information, and associated risk scores, Green does not explicitly teach an explicit matrix data structure: a contextual security matrix (CSM)-based risk score that is generated using a CSM; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM is generated using a CSM model; wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information. However, Shubhabrata teaches: a contextual security matrix (CSM)-based risk score that is generated using a CSM (Shubhabrata, A vulnerability score 212, a threat score 214, and a contextual score 216 are combined to form a prioritization score 218.
The prioritization score 218 can be applied to indicate the impact or severity of a vulnerability, so that vulnerabilities can be prioritized as to which ones need attention or remediation, (see col 6, lines 1-6), A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation. Vulnerability data 210 and/or threat information 208 can be consulted to guide the remediation effort, (see col 9, lines 10-14) The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202…. For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202. In some embodiments, for a specified asset 224 and a vulnerability specified by a CVE ID 220, the contextual module 310 determines whether the vulnerability matches the asset 224, (see col 8, lines 19-40)) [In light of the specification, the CSM-based score quantifies security exposure (exploitability and impact), see instant application [0006], [0032]; the prioritization score indicating the impact or severity is interpreted as the CSM-based score, and the data structure as the CSM]; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores (Shubhabrata, When vulnerability data 210 is received by the computing device 302, the vulnerability module 306 tracks the association of CVE ID 220, CVSS score 222, severity and exploitability information (when available) for each such vulnerability or exposure, for each asset 224, so that these are correlated.
For example, various associated entries could have links in a database, be on the same row or column in a table, or be listed sequentially in a file, etc. The vulnerability module 306 produces a vulnerability score 212, for each vulnerability for each asset 224, which could be the base CVSS score 222 or the temporal CVSS score 222 or a combination (see col 7, lines 18-29); static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc., and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc. (see col 5, lines 34-58). The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202. Static tag information 204 and dynamic tag information 206 may be included in the workload context 102… For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202 (see col 8, lines 19-34)) [Examiner interprets the system disclosing multiple vulnerabilities (i.e., a plurality of security issues), multiple tags (i.e., a plurality of instances of the contextual information), multiple asset-based contextual scores (i.e., a plurality of contextual scores), and storage in the data structure (i.e., the CSM) as the limitation above].

Wherein the CSM is generated using a CSM model (Shubhabrata, system and method employ an algorithm that correlates vulnerabilities with contextual information such as threat data and virtualization tags (e.g., as provided in the virtualization environment by a vendor such as VMware, etc.).
The algorithm works on a three-dimensional (or three-axis) model in some embodiments. The three dimensions are summarized below:
Dimension #1—Vulnerability (e.g., as reported by vulnerability assessment products). Related data could include base/temporal CVSS score, common vulnerabilities and exposures identifier (CVE ID), severity, etc.
Dimension #2—Threat (e.g., threats received from Threat Intelligence systems such as DeepSight). Related data could include threat impact, impacted CVE ID, type of threat, operating system impacted, applications impacted, etc.
Dimension #3—Workload Context: Tags (e.g., Operational Tags as well as Security Tags, i.e., static tags and dynamic tags, as defined in a virtualization environment using VMware, etc.) (see col 3, lines 43-67).
The contextual module 310 tracks the workload context 102 of each of the assets 224, for example by establishing a data structure in memory and populating the data structure with information derived from the tags 202… For each asset 224, and for each vulnerability considered by the vulnerability module 306, the contextual module 310 generates a contextual score 216. To generate the contextual score 216, the contextual module 310 correlates aspects of the specified vulnerability, e.g., from vulnerability data 210, and aspects of the asset 224, e.g., from metadata in tags 202.
In some embodiments, for a specified asset 224 and a vulnerability specified by a CVE ID 220, the contextual module 310 determines whether the vulnerability matches the asset 224 (see col 8, lines 19-40)) [In light of the specification, a CSM model requires a model that defines scoring relationships; examiner interprets the three-axis correlation algorithm, which defines the relationships among base scores, vulnerabilities, and their workload context or contextual information, as teaching that the CSM is generated using a CSM model].

Wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information (Shubhabrata, The prioritization module 312 of FIG. 3 cooperates with the vulnerability module 306, the threat module 308 and the contextual module 310, to produce the prioritization score 218 from the vulnerability score 212, the threat score 214 and the contextual score 216. In one embodiment, the prioritization module 312 multiplies the vulnerability score 212, for a particular asset 224 and a particular vulnerability (e.g., as identified by a CVE ID 220), the threat score 214, for the asset 224 and the particular vulnerability, and the contextual score 216, for the asset 224 and the particular vulnerability. This result can then be scaled, e.g., by dividing by a predetermined number, to produce the prioritization score 218. Various scales are readily devised for each of the scores 212, 214, 216, 218, as are various scaling factors. The prioritization score 218 thus represents a relative numbering or ranking of priority of a specific vulnerability of a specific asset 224, relative to other vulnerabilities and/or other assets 224… A relatively high prioritization score 218 suggests the vulnerability in the asset 224 should be addressed by remediation (see col 8, lines 60-67, and col 9, lines 1-12). The vulnerability score could include, or be based on, a base or temporal CVSS score, or both.
This could be accompanied by a CVE ID, identifying a particular vulnerability in the asset for which the CVSS score is determined (see col 9, lines 35-39); static tags such as CRITICAL DATA, WEB, INTERNET FACING, ADOBE_APP, INTERNET_EXPLORER_APP, etc., and dynamic tags such as VIRUS FOUND, INTRUSION DETECTED, etc. (see col 5, lines 34-58)) [In light of the specification, a CSM-based score is a score quantifying risk and exposure; the system discloses a vulnerability score (i.e., base score), a contextual score, and a prioritization score (i.e., CSM-based score) in a single structured framework, as in the limitation above].

Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Green to include the concept of a contextual security matrix (CSM)-based risk score that is generated using a CSM; wherein the CSM comprises a plurality of security issues, a plurality of instances of the contextual information, and a plurality of contextual scores; wherein the CSM is generated using a CSM model; and wherein the CSM comprises CSM-based risk scores, the base-scores of the security issues, and the plurality of contextual scores of instances of the contextual information, as taught by Shubhabrata, for the purpose of combining a vulnerability score 212 and a contextual score 216 to form a prioritization score 218, which can be applied to indicate the impact or severity of a vulnerability so that vulnerabilities can be prioritized as to which ones need attention or remediation [Shubhabrata: col 6, lines 1-6].

Regarding claims 17-20: Claims 17-20 recite commensurate subject matter as claims 4, 8, 6, and 10, respectively. Therefore, they are rejected for the same reasons.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 12518021 B1: “relates to cybersecurity, and particularly agentless cybersecurity solutions for cloud environments”
US 20240039954 A1: “relates to calculating cybersecurity risk factors and exhibiting information related to risk factors and recommended remediation actions on an interactive Graphical User Interface (GUI) display”
US 20150106867 A1: “relate to the field of network security techniques. In particular, various embodiments relate to security information and event management (SIEM) based on asset attributes of a network”
US 20180219888 A1: “relates to intelligence generation and activity discovery from events in a distributed data processing system”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIKSHYA POUDEL, whose telephone number is (703) 756-1540. The examiner can normally be reached 7:30 AM - 5 PM, Mon-Fri.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SHEWAYE GELAGAY, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.N.P./ Examiner, Art Unit 2436
/TRONG H NGUYEN/ Primary Examiner, Art Unit 2436
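To make the mapping concrete: the scheme the Office Action cites from Shubhabrata is a matrix-like structure keyed by (asset, vulnerability), holding a base (CVSS-style) score, a threat score, and a tag-derived contextual score, combined multiplicatively and divided by a predetermined number to yield the prioritization score. The sketch below illustrates that scheme only; every name, tag weight, and score value is a hypothetical example, not taken from Shubhabrata, Green, or the instant application.

```python
# Illustrative sketch of the multiply-and-scale prioritization scheme
# described in the cited passages (all values hypothetical).
from dataclasses import dataclass, field

@dataclass
class Entry:
    base_score: float    # CVSS-style base/temporal score (0-10)
    threat_score: float  # from threat intelligence (0-10)
    tags: set = field(default_factory=set)  # workload-context tags

# Hypothetical per-tag contextual weights (the "instances of contextual
# information" with their contextual scores).
TAG_SCORES = {"INTERNET_FACING": 3.0, "CRITICAL_DATA": 2.5, "VIRUS_FOUND": 4.0}

def contextual_score(tags):
    # Correlate asset context with the vulnerability; here, take the
    # largest applicable tag weight, defaulting to a neutral 1.0.
    return max((TAG_SCORES[t] for t in tags if t in TAG_SCORES), default=1.0)

def prioritization_score(entry, scale=10.0):
    # Multiply the three dimension scores, then divide by a
    # predetermined number, per the cited multiply-and-scale step.
    return entry.base_score * entry.threat_score * contextual_score(entry.tags) / scale

# A tiny matrix-like structure: rows are (asset, CVE) pairs.
csm = {
    ("web-01", "CVE-2023-0001"): Entry(7.5, 6.0, {"INTERNET_FACING"}),
    ("db-02", "CVE-2023-0002"): Entry(9.0, 2.0, {"CRITICAL_DATA"}),
}

# Rank vulnerabilities by prioritization score, highest (most urgent) first.
ranked = sorted(csm, key=lambda k: prioritization_score(csm[k]), reverse=True)
# web-01/CVE-2023-0001: 7.5 * 6.0 * 3.0 / 10 = 13.5, ranked first
```

Under this reading, the claimed "CSM" corresponds to the `csm` mapping, the "contextual scores" to the tag weights, and the "CSM-based risk score" to the prioritization score.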

Prosecution Timeline

May 18, 2023
Application Filed
Jan 24, 2025
Non-Final Rejection — §103, §112
Apr 08, 2025
Interview Requested
Apr 15, 2025
Applicant Interview (Telephonic)
Apr 16, 2025
Examiner Interview Summary
Apr 30, 2025
Response Filed
Jul 26, 2025
Final Rejection — §103, §112
Nov 04, 2025
Request for Continued Examination
Nov 08, 2025
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591663
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 31, 2026
Patent 12470379
LINK ENCRYPTION AND KEY DIVERSIFICATION ON A HARDWARE SECURITY MODULE
2y 5m to grant Granted Nov 11, 2025
Patent 12452254
SECURE SIGNED FILE UPLOAD
2y 5m to grant Granted Oct 21, 2025
Patent 12341788
NETWORK SECURITY SYSTEMS FOR IDENTIFYING ATTEMPTS TO SUBVERT SECURITY WALLS
2y 5m to grant Granted Jun 24, 2025
Patent 12292969
Provenance Inference for Advanced CMS-Targeting Attacks
2y 5m to grant Granted May 06, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
99%
With Interview (+80.0%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
