09/27/2021 | News release | Distributed by Public on 09/27/2021 10:28
Minimizing Time To Remediate (TTR) is becoming one of the key metrics of security program effectiveness. This holistic measure reflects many capabilities and is a good validation of your risk mitigation capacity because it captures how quickly you can respond to the most critical vulnerabilities and threats in your environment.
One key factor that impacts TTR the most is your ability to prioritize the remediation actions. Prioritization is the zen art that answers two difficult questions: "Where do I begin?" and "How do I continue?"
Risk-based and threat-based prioritization have generated big hype, with different technologies providing a score representing the risk rating to be used for prioritization purposes. Risk is complicated and multi-faceted, and is often quantified in different ways. The challenge is to understand what the risk rating represents and the extent to which your business and organization align with that logic.
Let me summarize an interesting conversation I had with Andrea Piras, a cybersecurity analyst working for a transportation company in Sardinia, Italy, about how Qualys helps translate a complex theory into actionable business advantage.
As a thought experiment, Andrea and I walked through a theoretical calculation of different types of risk perspective that shows the challenges of this analytical approach to prioritization:
There are several methods to calculate risk: qualitative, quantitative, and semi-quantitative.
Taking, for example, a standard quantitative method for calculating risk, we need time and resources to collect statistical and impact-analysis data. Statistics could mean how many times an event happened during a year; impact analysis relates to how much loss a service stop causes, e.g. 1.000€/hour. Thus, we translate risk into meaningful numbers to support strategic decisions using parameters such as EF (Exposure Factor), SLE (Single Loss Expectancy), ARO (Annualized Rate of Occurrence), and ALE (Annualized Loss Expectancy).
For a given asset, the forecasted loss is given by the formula:
* SLE = asset value x EF
* SLE measures a threat impact on the given asset (a server, a framework, a repository of data, a service).
* ARO measures the yearly frequency of the threat.
The Annualized Loss Expectancy is therefore given by the formula:
* ALE = SLE x ARO
Let's put the theory in numbers, supposing that an organization runs an e-commerce service invoicing 1 M€/year; the service consists of hardware, software, and the people running the service.
We assume that a DDoS attack, blocking sales and the productivity of the operating personnel, has an Exposure Factor of 5%.
We also know that this attack has happened 6 times in the last 3 years, therefore ARO = 6/3 = 2.
Based on the theory described above, we have:
* SLE = 1.000.000€ x 0.05 = 50.000€
* ALE = 50.000€ x 2 = 100.000€
According to the quantitative risk method described, the organization's expected loss is 100.000€/year.
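The calculation above can be expressed as a short Python sketch, using the same figures from the e-commerce example:

```python
# Quantitative risk sketch using the figures from the example above.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x EF"""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO"""
    return sle * aro

asset_value = 1_000_000   # e-commerce service revenue, EUR/year
ef = 0.05                 # Exposure Factor for a DDoS attack
aro = 6 / 3               # 6 incidents over 3 years => ARO = 2

sle = single_loss_expectancy(asset_value, ef)
ale = annualized_loss_expectancy(sle, aro)
print(f"SLE = {sle:,.0f} EUR")   # SLE = 50,000 EUR
print(f"ALE = {ale:,.0f} EUR")   # ALE = 100,000 EUR
```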
To simplify, a possible mitigation-based remediation plan could be to install a next-generation firewall or IPS to counter these DDoS attacks, with an estimated cost of 50.000€ plus 5.000€/year of maintenance.
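To judge whether the countermeasure is worth its cost, you can annualize the control cost and compare it against the loss it is expected to avoid. In this sketch the 5-year amortization period and the 90% mitigation effectiveness are illustrative assumptions, not figures from the scenario:

```python
# Hedged cost-benefit sketch: annualized control cost vs. avoided loss.

ale_before = 100_000          # EUR/year, from the ALE calculation above
capex = 50_000                # one-off firewall/IPS purchase, EUR
opex = 5_000                  # EUR/year maintenance
amortization_years = 5        # assumed depreciation period
effectiveness = 0.90          # assumed fraction of DDoS losses avoided

annual_control_cost = capex / amortization_years + opex
ale_after = ale_before * (1 - effectiveness)
net_benefit = ale_before - ale_after - annual_control_cost

print(f"Annual control cost: {annual_control_cost:,.0f} EUR")  # 15,000 EUR
print(f"Residual ALE: {ale_after:,.0f} EUR")                   # 10,000 EUR
print(f"Net annual benefit: {net_benefit:,.0f} EUR")           # 75,000 EUR
```

Under these assumptions the control pays for itself; with a lower effectiveness or a higher control cost, the same arithmetic could argue against it.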
The challenge with this approach is that it is hard to accurately estimate the inputs, and a simple equation does not account for the statistical variation that should be expected in a real-world situation.
When dealing with vulnerabilities and potential exploitation in a modern, diverse digital ecosystem, the complexity of the problem is amplified enormously; this requires a different approach to derive more useful conclusions.
The most effective approach is to describe what worries you, leveraging a technique that converts these perceptions into prioritization factors. This technique is more holistic: you describe the effects you are concerned about (DDoS, wormable infection) and the attack surface you are concerned about (e.g. internet-facing systems), and let Qualys VMDR propose the most effective remediation based on data from your own environment.
Keep in mind that we're trying to answer the question "Where do I begin [with my remediation efforts]?".
Let's see some examples.
The screenshot below illustrates this perceived-risk description approach in Qualys VMDR.
Once you describe the perceived threat, you need to add additional context: patch awareness. Which patches are available? How many of them are already installed and where? Is there patch supersedence to consider?
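Patch supersedence means an older patch may be replaced by a newer cumulative one, so only the newest patch in the chain is worth deploying. This is an illustrative sketch with hypothetical patch IDs, not a Qualys API:

```python
# Illustrative patch-supersedence resolution (hypothetical patch IDs).
# 'superseded_by' maps an older patch to the patch that replaces it;
# following the chain yields the single latest patch worth deploying.

superseded_by = {
    "KB500100": "KB500250",
    "KB500250": "KB500375",
}

def latest_patch(patch_id: str) -> str:
    """Walk the supersedence chain until no newer patch exists."""
    seen = set()
    while patch_id in superseded_by:
        if patch_id in seen:          # guard against cyclic metadata
            raise ValueError(f"supersedence cycle at {patch_id}")
        seen.add(patch_id)
        patch_id = superseded_by[patch_id]
    return patch_id

print(latest_patch("KB500100"))   # KB500375
```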
Also, do not forget that misconfigured systems can create vulnerabilities, especially in cloud workloads, where you have shared responsibility. If I configure storage on AWS or Azure and forget to restrict the IPs able to access it, I risk a data leak; if I forget to enable multifactor authentication on a compute instance, the consequences could be even worse.
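The two misconfigurations just mentioned can be checked programmatically. This minimal sketch operates on a simplified stand-in for a storage configuration, not an actual AWS or Azure SDK structure:

```python
# Minimal misconfiguration check on a simplified bucket config dict
# (a stand-in for a real cloud policy, not an actual SDK structure).

def find_storage_misconfigs(config: dict) -> list[str]:
    findings = []
    # An open CIDR means the storage is reachable from any IP address.
    if "0.0.0.0/0" in config.get("allowed_cidrs", []):
        findings.append("storage reachable from any IP (open CIDR)")
    # Missing MFA enforcement on the associated access path.
    if not config.get("mfa_required", False):
        findings.append("multifactor authentication not enforced")
    return findings

bucket = {"allowed_cidrs": ["0.0.0.0/0"], "mfa_required": False}
for issue in find_storage_misconfigs(bucket):
    print(issue)
```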
Very often the remediation activity (patching, configuration changes, deployment of compensating controls) is performed by teams outside vulnerability lifecycle management, and you may need interdepartmental integration to avoid conflicts and inefficiencies. Foster APIs, role-based access control, and proper rights management, and you will see operational velocity and effectiveness increase tangibly.
Finally, aim to build observability: convert raw metadata into traceable, actionable information. For example, aggregate vulnerabilities detected in the last 30 days, then from 30 to 60 days, then from 60 to 90, so you can make decisions and take action based on the age of your vulnerabilities and the responsiveness of your patching program. For each category, map the existing patches and highlight where a ready-made exploit is already available.
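The age-bucket aggregation just described can be sketched in a few lines. The detection records here are hypothetical and not a Qualys export format:

```python
# Sketch of the age-bucket aggregation described above: group detections
# into 0-30, 31-60, 61-90, and 90+ day windows (hypothetical records).

from datetime import date, timedelta

def age_buckets(detections: list[date], today: date) -> dict[str, int]:
    buckets = {"0-30": 0, "31-60": 0, "61-90": 0, "90+": 0}
    for detected in detections:
        age = (today - detected).days
        if age <= 30:
            buckets["0-30"] += 1
        elif age <= 60:
            buckets["31-60"] += 1
        elif age <= 90:
            buckets["61-90"] += 1
        else:
            buckets["90+"] += 1
    return buckets

today = date(2021, 9, 27)
detections = [today - timedelta(days=d) for d in (5, 12, 40, 75, 120)]
print(age_buckets(detections, today))
# {'0-30': 2, '31-60': 1, '61-90': 1, '90+': 1}
```

Feeding each bucket's count into a dashboard widget, with exploit availability as a second dimension, gives exactly the trending view described above.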
Make this dashboard dynamic with trending information, and you will track the patch program efficiency over time.
Replicate this approach with information that is relevant for the SecOps, IT, and Compliance departments, and before you know it you will have created a fluid, agile situational awareness that everyone will admire.
Building a modern, risk-based security program is not an impossible dream: Qualys delivers a platform for cybersecurity asset management in which both prevention/remediation and detection/response capabilities are grounded. Prioritizing your actions with a descriptive, perceived-risk approach delivers the needed effectiveness and operational velocity, while exposing a strategic situational awareness that will make security a true business enabler.