In my head: same. Their strong pre-conditions make them unlikely. The condition wording focuses on the conditions themselves, not only on their likelihood, as in “Why is this unlikely? Because A, B, C” or GIVEN/WHEN/THEN language.
What I would say about the risk assessment of threats (and I’m just talking about the scenario where a threat model has been done and you have a list of threats) is this: fundamentally, what do you need to do with a list of threats? You need to decide whether you are going to expend the resources to mitigate them.
Using a risk framework to assess a threat is really just a means to answer that question. I often get the impression that people see risk assessment as some mandatory part of dealing with threats, as if it’s essential for some reason. I would say it’s not essential, and that you can often decide what (and whether) you are going to do something without a risk assessment. Sometimes risk assessments (of some type) do need to happen though, generally because you have competing requirements or you need to convince someone of something.
However, and this is where I’ll present some of my more controversial opinions, all the popular approaches to risk assessment that I have seen really fail to stand up to any (or at least my) scrutiny:
- The first reason I say that is that most risk assessment approaches I have seen rely on determining likelihood and impact. In qualitative approaches, people just guess based on their knowledge. In quantitative approaches, we don’t have good data on either likelihood or impact, so approaches like FAIR make you ask smart people and then crank their opinions through math to get numbers. To me this is no different from a qualitative approach: “garbage in, garbage out” applies, and the input is still people’s guesses.
- My even more controversial opinion is that even if we had good industry data, it wouldn’t help solve the actual problem we have. Briefly, the data is useful for threats that are highly likely (e.g. SQL injection, XSS, etc.), but the actual problem we face is low-likelihood events with high impact. Why can’t we use industry data to guide us there? The reason (or at least my argument) is that data about other companies’ likelihood/impact of threats doesn’t tell us about our own company’s likelihood/impact of threats: too many factors that influence threats vary across companies (for a more mathematical take on this, read about how ensemble statistics can fail). The only thing I’ve found in my research that lets us handle low-likelihood/high-impact threats is to treat them as Black Swans. I did warn you it might be controversial! But there are a lot of things to dig into there even if you disagree with me.
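To make concrete what “crank their opinions through math” looks like, here is a minimal FAIR-style Monte Carlo sketch in Python. Every range below is a hypothetical expert guess I’ve invented for illustration, not real data; the point is that however precise the output percentiles look, they are only as good as those guessed inputs.

```python
import random

random.seed(42)

# Hypothetical expert estimates (invented for illustration, not real data):
# attack attempts per year, probability an attempt succeeds, loss per success.
def annual_loss():
    attempts = random.uniform(1, 10)             # guessed range
    p_success = random.uniform(0.05, 0.20)       # guessed range
    loss_each = random.uniform(50_000, 500_000)  # guessed range
    return attempts * p_success * loss_each

# Monte Carlo: sample many possible years, then read off percentiles.
samples = sorted(annual_loss() for _ in range(100_000))
print(f"median annual loss: {samples[len(samples) // 2]:,.0f}")
print(f"95th percentile:    {samples[int(0.95 * len(samples))]:,.0f}")
```

The percentiles come out looking authoritative, but each distribution above just encodes a guess, which is exactly the “garbage in, garbage out” concern.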
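On ensemble statistics failing: a standard toy demonstration (my illustration, not from this thread) is a multiplicative gamble in which the cross-player “industry average” grows every round, even though almost every individual player eventually goes broke. The same gap is why cross-company incident averages can look reassuring while any single company faces ruin.

```python
import random

random.seed(1)

# Each round, wealth is multiplied by 1.5 on heads or by 0.6 on tails.
# Ensemble (cross-player) expectation per round: 0.5*1.5 + 0.5*0.6 = 1.05,
# so the population average looks like steady growth.
# Time-average growth factor for one player: sqrt(1.5 * 0.6) ~= 0.949 < 1,
# so a single player's wealth almost surely decays over time.
def play(rounds):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble view: average final wealth of many players after 10 rounds,
# roughly 1.05**10 ~= 1.63.
players = [play(10) for _ in range(100_000)]
print(sum(players) / len(players))

# Individual view: one player over many rounds typically ends up
# astronomically close to zero, despite the growing ensemble average.
print(play(1000))
```

The ensemble average is dominated by a few lucky outliers, which is the sense in which statistics pooled across many companies can say nothing useful about your own trajectory.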
For an (unfortunately) timely example of whether risk assessment is useful, consider this recent company that had to close due to a ransomware attack. What would a risk assessment have told that company? The probability distribution of the cost of a ransomware incident? Would that have prevented the attack? What we do know is that the company can’t contribute to future years’ industry data sets of incidents, because it doesn’t exist any more.
I think one of the main misunderstandings most people have about risk quantification and risk management is that if they are convincing enough, someone will allocate more resources to fix the problem. They won’t. Resources will be taken from somewhere else, for the simple reason that you can’t just hire people out of the blue. The effect is that a risk has to be pretty critical and highly likely in the near term for it to be prioritized over the next 6 months. Nothing less than catastrophic will get immediate attention.
Critical, highly likely, near-term risks almost never appear on your radar before it is too late.
Still, despite that, it’s fully possible to mitigate risks with a low likelihood and low impact immediately: you just need to have the development and operations teams do threat modeling and secure design instead of risk management. I do this all the time.
I like your Black Swan hint, and especially that excerpt from the link:
> The book asserts that a “Black Swan” event depends on the observer: for example, what may be a Black Swan surprise for a turkey is not a Black Swan surprise for its butcher. Hence the objective should be to “avoid being the turkey”, by identifying areas of vulnerability in order to “turn the Black Swans white”.
Seems like there is some acceptance for “lazy risk evaluation” (lazy as in lazy evaluation), which is at the core of @adamshostack’s suggestion!