I have a new, longish post on the topic on the Shostack + Friends Blog: Risk Management and Threat Modeling. Eager for your feedback.
Really great points, and I like the bridge building analogy. I actively try to avoid as much "risk assessment" as possible when it comes to threat modelling, as it's a rabbit hole that is too easy to go down and often doesn't actually provide much value (in my experience).
I tend to focus more on whether this thing (that is being built and threat modelled) has been "security engineered" well (much like a safety engineer would review a bridge). If basic security engineering things are missing, then they need to be fixed. I tend to think about it more like the thing should achieve a certain bar of security - call it a standard, requirements, whatever you want. Often the "bar" is not explicitly defined, but that can be defined in time, and security best practices are available for most things anyway.
Of course, technical and business constraints exist that often mean the system can't follow a best practice, and this is where you would like a risk assessment to help, but it rarely does (in my experience). Better to focus on applying good engineering to solve the problem as well as possible, and if needs must, capture the threats in sufficient detail to explain them to someone who owns the risk so they can make a judgement call (and no, I don't think doing that "capturing" in most risk assessment frameworks helps).
I like how you express it in such simple terms that anyone can understand. It takes me back to the time I was reading "Waltzing with Bears: Managing Risk on Software Projects" by T. DeMarco and T. Lister.
The message in your post echoes a very similar point made in the book, emphasizing the importance of proactive risk management in software development. A threat model isn't explicitly named, but advocating for a proactive approach to risk management means that you are effectively doing threat modeling, in my view.
thanks! I think we need to call attention to the fact that a great many of us are noticing that risk assessment "doesn't actually provide much value" and stop wasting energy on it.
Thanks, @adamshostack, for posting this. I didn't understand some of your previous talk about "not mixing TM and risk management". That one clarifies your point a lot.
If what you fix is not guided by risk assessment…
Won't that create an all-critical bias?
Sure, we could fix 1000 "easily addressed" things, but that will also be expensive in sum.
Kleinvieh macht auch Mist.
Even small animals produce manure.
What are your thoughts?
Also loved (and agree with) how @steve_gibbons commented on LinkedIn:
I'd argue that the "fix it" activity is often "evaluate priority and add to the backlog queue." (Yes, that's easy to get wrong.)
So doesnât priority bring risk assessment back into the game anyway?
I don't think the risk assessments are the issue. I use risk assessments in the same way as I use threat modeling: I focus on what can go wrong, the potential risks, and what we can do about them, and we do it as the team is starting the work, not after, and the whole team is always there. That means that we effectively are doing the threat modeling and risk assessment together. In this way, our risk assessment process is very much the same as our threat modeling process. The big difference is how the document looks. The rest is effectively a threat modeling session with risk management jargon. We also do pure threat modeling as well. It all depends on whether we need a drawing or not.
What I do see is that I have to drive the risk assessments much more than I have to drive the threat modeling sessions. Risk management wouldn't happen if I wasn't there. The software team asks me to do the session together with them, though. The risk management happens when the team comes back to me and tells me that there is something they can't do because of some technical difficulty. That's when we get to have the risk management discussion.
Glad that this helps clarify. I believe we have the problem you point out today, with risk management being applied, and so I first (respectfully) assert that your comparison is unfair.
Second, I think risk management makes it worse, because it's effort that doesn't meet the needs of those doing the prioritization, and maybe we should free up that energy to be invested in more helpful ways.
I explicitly create a model in which we are not "effectively are doing the threat modeling and risk assessment together." Like any model, we can judge its usefulness. I'm fine with you not seeing it as useful, but I do ask that you engage with "what if we treat risk quantification as separate." If that's useless to you, I apologize for wasting your time. I think it's useful, which is why I shared it.
I do see the usefulness. It's just that I have never really spent much time on risk quantification or arguing with others over whether threats or risks need to be mitigated or accepted, but that is probably due to my fortunate circumstances or lack of experience. I have no problem seeing that it is quite useful under other circumstances and will keep it in the back of my mind.
I think that is an unnecessarily strong statement and, respectfully, an oversimplification that only leads to organizational silo-building.
I do agree with the article on the point that linking technical threats to business risks can be hard, but I also think this should not hold us back from trying, and getting better at it with practice.
If you manage to establish the connection, you benefit from showing how your threat modelling process contributes to business priorities, and you can potentially gain far more management buy-in. Depending on the organizational culture you operate in, this can be the decisive factor in the success of your threat modeling efforts.
Another point, if I understood correctly from the article, was that many of the identified threats do not require much further thought in terms of risk assessment, as they are either trivial to address or so grave in their impact or likelihood that they demand immediate attention anyway.
Fair point. But still, wouldn't the threat modelling process benefit if we had the means to show how often it helps identify high-impact, low-hanging fruit with low-cost fixes? If anything, that would be a great argument to support the process, so I see it as a lost opportunity not to communicate these results in a structured way.
The Cyber Resilience Act is becoming more and more prominent in discussions around the importance of threat modelling. CRA talks about risk management, and we claim that threat modelling is the natural way to do it, or rather, we don't see an alternative. I think the argument has its merits, and I actually support it. However, CRA still refers to risk management as a required activity, so in order to argue that threat modelling is our means to do it, we should do our homework and establish a clear connection between the two processes. One way to do this is to do the hard work of linking (at least a subset of) technical threats to business risks, properly assess them, and feed them into the risk management process. Also, I don't currently see any other obvious way to actually demonstrate to a non-SME auditor that threat modelling fulfills the CRA requirements.
I agree with @hewerlin that prioritizing something in your backlog is also a form of risk rating, just informal and non-transparent. It still takes effort, however, so I'm somewhat unconvinced that a lot of time and energy can be saved by not documenting the thinking behind the priorities. I believe the real savings would actually come from becoming better (as in more efficient) at doing the assessment with quantitative methods and navigating away from qualitative assessments (low-medium-high) that, indeed, have marginal utility.
Thanks Daniel. I do think the silos are there, between "engineering" and "risk" and more, and that's appropriate: if you let people set risk numbers, there's a chance they'll manipulate them, and so many orgs want it siloed.
To your metrics point, we agree, and I don't see how a risk/threat breakdown changes that. Maybe the addition of the words "high impact", but I think you can get there with "OMG this would have stunk" without quantification.
Lastly, you make a great point about CRA: "we should do our homework and establish a clear connection between the two processes." My thinking, and I'm speaking with folks about this, is to push threat modeling within the standards. That's complicated because you need people who want to spend time in standards bodies, but I don't see a way around that.
I have observed some of the following consequences, simplified. I have written about them in more detail in the TM of TM; the ID numbers below are from that model:
No risk assessment 3.1.1 → All-critical bias 3.1.7
All-critical bias → Unfeasible / high-effort mitigations 3.2.1
Unfeasible / high-effort mitigations → Undone mitigations 3.3.3
Undone mitigations → Insecurity → Actual damage
All-critical bias → (Insane and) untrusted rating scheme 3.1.8
I believe we need something sane that tells us when not to waste our energy (because it's unlikely / not so bad / protection suffices / …).
How do you consider this an unfair comparison?
Off topic, but still: I don't think the reason security prioritization gets neglected is inaccurate or missing risk quantification. In my experience, the main challenge is a lack of understanding of the value of threat modeling and risk management. You can't fix that with better or more information, because the inherent issue is the perception that it takes a lot of time and resources and doesn't add value. Using more time and resources won't help you. I would instead go the other way. Why? The threats and mitigations are usually well understood and easily applied. The challenge is convincing the organization that it should be done on a daily basis and that the inconvenience is worth it. You do that by making it trivial to do, not by bringing more complexity to the table.
"Sure, we could fix 1000 'easily addressed' things, but that will also be expensive in sum." < We have this problem today; adding expensive risk management work doesn't get those easily addressed things fixed.
Ah. How about
Less busywork + cost of effective risk assessment that doesn't suffer from all-critical
<
A lot of work
<
A lot of work + cost of ineffective risk assessment
?
I don't know that I even believe this: "Less busywork + cost of effective risk assessment that doesn't suffer from all-critical"
Maybe it would help to know: which threats are you going to accept as "not important"? (And would you put the list on your website?)
My experience has been that risk quantification is usually something that comes up when someone doesn't want to do the extra work, and usually after the thing has been built. I try to avoid it altogether. It's better for me to use critical, high, medium and low, because that means I can push the team to increase the priority without having to go too much into the details as to why the priority needs to increase. This is why, where I currently work, it doesn't matter that we mix risk management and threat modeling. Nobody asks me how critical it is before it's so critical that they need to do a hot fix.
Like you hint at: publish your TM.
It's usually one of three causes: unlikely (strong preconditions) / low impact (weak postconditions) / already defended.
When I teach the "what would ruin a day at the beach" toy example, I sometimes get shark attack or tsunami. Yes, that would really suck. But we will most likely have a great day at the beach without a tsunami if we just accept the risk. Same with technical risks, some of which are ridiculous.
Threats with strong preconditions are another case. Let's give an example: apps usually have high-impact threats after account takeover or server takeover. We may mitigate for additional harm reduction. But protect / detect / respond for those takeovers should have been considered anyway, and hopefully the takeovers have been made unlikely. So account / server takeover defense may already be sufficient defense for those follow-up threats.
Are threats with strong pre-conditions any different from threats with low likelihood?