It’s a threat model editor I’ve been using for my own stuff, and now I want to launch it for the community!
It focuses on simplicity, quick start, and sharing.
Some of the use cases I’ve used it for:

- jot down a few threats and mitigations really quickly
- share a threat modeling space with a friend
- start a training with a quick real-world threat modeling scenario and capture the results
- show a friend “what’s that cool threat modeling thing you have been so passionate about?” by sharing a link to a sweet threat model you created for them
I would love it if you added a section for system description (basically an equivalent of " What are we working on?"). It makes the TM easy to read when shared.
Hi @adamshostack, good idea, I’ll add it to my backlog. I will also add Discourse named emojis with native browser rendering… Unicode emojis are already working…
This encourages / teaches a propose-then-choose flow for mitigation planning, where we first propose all sorts of ideas, then favor/reject or mark realized.
Great. This is very much aligned with my experience that we need to capture the development status. See, for example, Figures 5 & 6 (and related text) at Redirecting.
Yes, it is an educational tool. And I believe this provides the opportunity to discuss important points:

- propose-then-choose is one of the things that really distinguishes explicit threat modeling from doing things intuitively and picking the first thing that comes to mind. We can have rich discussions about different options, their properties, and what to choose.
- it is in the core nature of threat modeling that a lot of the things that protect our system are at first undone, and an undone mitigation protects from nothing. So we can discuss how threat modeling and realization work together, and what the role of feasibility is.
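To make the propose-then-choose flow concrete, here is a tiny sketch of a status model for mitigations. The names are illustrative assumptions, not the tool’s actual schema:

```python
from enum import Enum

# Hypothetical status model for the propose-then-choose flow described above.
class MitigationStatus(Enum):
    PROPOSED = "proposed"   # brainstormed, not yet evaluated
    FAVORED = "favored"     # chosen for implementation
    REJECTED = "rejected"   # considered and set aside
    REALIZED = "realized"   # actually implemented

def protects(status: MitigationStatus) -> bool:
    """An undone mitigation protects from nothing; only a realized one counts."""
    return status is MitigationStatus.REALIZED

print(protects(MitigationStatus.FAVORED))   # → False
print(protects(MitigationStatus.REALIZED))  # → True
```

The point of the explicit `PROPOSED` state is exactly the educational one: ideas live in the model before anyone commits to them.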
I was thinking about adding more selects:

- how strong is the protection
- what is the cost / effort

… to support discussion about different mitigations. But that probably over-simplifies and dilutes the tool…?
I’m facing similar challenges (protection strength + cost/effort).
For now, I’ve added characteristics to security controls / mitigations (and other elements). These characteristics can be user-defined, so that users can incorporate their own enumerations regarding the effectiveness/cost/effort of controls. I find this somewhat in line with NIST’s OSCAL, which allows you to define parameters for the descriptions of security controls, though I doubt that anyone uses this capability to actually support decision making about the controls (which is my intention). BTW, if someone does use OSCAL to assess/analyse security controls cost/effort/etc. (and not just to describe the controls), I would love to know!
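For illustration, such user-defined characteristics could be represented as simply as this (a sketch only; the field names and values are made up, not the tool’s actual schema):

```python
# Hypothetical mitigation with user-defined characteristics, loosely inspired
# by OSCAL-style parameters on control descriptions. All names are made up.
mitigation = {
    "name": "Rate limiting on login endpoint",
    "characteristics": {
        "effectiveness": "medium",  # user-defined enumeration
        "cost": "low",              # user-defined enumeration
        "effort_days": 3,           # or hours, T-shirt sizes, whatever
    },
}

print(mitigation["characteristics"]["effectiveness"])  # → medium
```

Because the enumerations belong to the user, the tool doesn’t have to take a stance on how effort or cost should be measured.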
Yes, custom is always cool for the thing you really use, like OTM with its attributes map. God only knows if they measure in hours, T-shirt sizes, or whatever.
I love the simplicity, (but) I’d suggest more granularity in the rating scales. Low, medium, and high are great for a first swag, but (for example) real-world likelihood/frequency can vary from:

- as unlikely to occur as an extinction-level asteroid strike, to
- occurs thousands of times per minute/second

And real-world “how bad” can vary from:

- trivial, walk it off, to
- extinction-level event

Cost is one of the easiest to quantify:

- pocket change, through
- decades of GDP
I’ve used logarithmic scales in the past to help with some pretty huge dynamic ranges, and also made creative use of adverbs (e.g. “extremely unlikely”), but I don’t have any better suggestions than those.
low / medium / high is an educational choice. It conveys the idea. It intentionally gets people upset, because it’s so fuzzy, so we can discuss how we would even tell.
When I designed it, I had a different approach with a full width red - yellow - green gradient bar. You could place dots wherever you liked, simulating the 0% - 100% from OTM.
I also had a collaborative thing where multiple opinions were possible.
People didn’t quite like it. Would you?
I thought about having space settings so people can choose between different setups. But there’s a tradeoff between flexibility and simplicity.
The biggest challenge with the HML scale isn’t the HH or LL combinations but the other two corners: extremes within the highs against extremes within the lows. In those cases, risk can be very overstated or very understated. This is easiest to see if one assigns a quantified range to each HML bucket and then does the math on each of the 36 sub-corners of the 3×3 grid, with the problem rearing its head when zero is multiplied by infinity.
It also rears up in larger grids; in my experience it isn’t encountered as often when there are more than 9 total buckets, but it does occur.
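The corner math is easy to sketch. Here is a toy version (the quantified ranges per bucket are my own made-up assumptions) showing how each cell of the 3×3 grid spans a best-case and worst-case sub-corner, and how wide those spreads get:

```python
# Made-up quantified ranges for each HML bucket; change them to taste.
likelihood = {"L": (0.001, 0.1), "M": (0.1, 0.5), "H": (0.5, 10.0)}  # events / year
impact = {"L": (1e2, 1e4), "M": (1e4, 1e6), "H": (1e6, 1e8)}         # dollars

for lk, (l_lo, l_hi) in likelihood.items():
    for ik, (i_lo, i_hi) in impact.items():
        # best-case and worst-case sub-corners of this grid cell
        lo, hi = l_lo * i_lo, l_hi * i_hi
        print(f"{lk} x {ik}: ${lo:,.2f} .. ${hi:,.0f} (spread x{hi / lo:,.0f})")
```

Letting the low likelihood bucket start at zero (or the high impact bucket run unbounded) is exactly the zero-times-infinity problem: the spread of the affected cells becomes infinite.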
For collaboration, I prefer a ranked voting scheme to prioritize the top N issues that are then rated, and subsequent votes to break deadlocks/contention on risk ratings (if they occur). This allows different thinking styles, areas of concern, and degrees of expertise to participate equally. If the contention was huge, using the minority opinion approach in documentation can preserve group harmony and still keep momentum.
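A ranked voting scheme like that can be sketched in a few lines, e.g. with a simple Borda-style count (the threat names and ballots below are made up for illustration):

```python
from collections import Counter

# Each participant ranks the threats they care about most, first place earns
# the most points; the top N issues are then rated in depth.
ballots = [
    ["SQLi", "XSS", "DoS"],
    ["DoS", "SQLi", "XSS"],
    ["SQLi", "DoS", "CSRF"],
]

def borda(ballots, top_n=2):
    scores = Counter()
    for ranking in ballots:
        for place, issue in enumerate(ranking):
            scores[issue] += len(ranking) - place  # 1st place earns the most
    return [issue for issue, _ in scores.most_common(top_n)]

print(borda(ballots))  # → ['SQLi', 'DoS']
```

A point-based count like this lets partial ballots and different areas of expertise contribute equally, which matches the goal above.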
I would recommend you check out the book How to Measure Anything in Cybersecurity Risk, specifically chapter 3. It introduces a simulation-based approach to generate something called a Loss Exceedance Curve (LEC).
It gives you the probability that your losses will exceed a certain amount, say your insurance value.
No matter how finely you tune the discrete scale, it will stay an ordinal scale with all the methodological issues: one assumes certain properties of the values, like additivity or distance, just because they are represented by numbers. However, those assumptions are not valid.
With the simulations behind an LEC you get exactly these properties: the results can be added up, LECs capture a sense of distance, and they can express the extreme values.
@avish I find them very well suited for decision support.
Loss Exceedance Curve = Probability of Exceedance from insurance math; it is a curve.
FAIR’s Loss Event Frequency and Loss Magnitude are, afaik, point estimates. LEF simply gives you a probability of a loss event happening, like 0.7. The same for Loss Magnitude: $70.
A LEC tells you the relationship between arbitrary loss value and the associated probability of exceeding that value.
- 70% for exceeding 100k
- 20% for exceeding 300k
- …
- 2% for exceeding 10 million

(Just as a curve, not as a list)
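To make that concrete, here is a tiny Monte Carlo sketch of an LEC, loosely in the spirit of the book’s chapter 3. Every parameter (event frequency, median loss, sigma) is a made-up assumption, and the crude Bernoulli-trial event count stands in for a proper Poisson draw:

```python
import math
import random

random.seed(7)

def simulate_annual_loss(freq=2.0, median=50_000.0, sigma=1.2):
    """One simulated year: random event count, lognormal loss per event."""
    # crude Poisson approximation via many small Bernoulli trials
    events = sum(1 for _ in range(1000) if random.random() < freq / 1000)
    return sum(random.lognormvariate(math.log(median), sigma) for _ in range(events))

years = [simulate_annual_loss() for _ in range(10_000)]

def exceedance(threshold):
    """P(annual loss > threshold), estimated from the simulated years."""
    return sum(loss > threshold for loss in years) / len(years)

for t in (100_000, 300_000, 10_000_000):
    print(f"P(loss > ${t:,}) = {exceedance(t):.1%}")
```

Plotting `exceedance` over a range of thresholds gives the curve; by construction it is monotonically decreasing, and simulated losses from independent scenarios can simply be added year by year.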
There is no such thing as “simple qualitative risk” … only fallacies that are inherent to all qualitative models, that are not obvious at first, and that make you believe you are doing something sound 😜
Understanding (and teaching) quantitative risk is not hard: the basics can be communicated in a session of about one hour.
Getting people unstuck on some methodological misunderstandings takes some getting used to: for example, that there is no objectively measurable probability of an event, and that, given there is NOT one, we still can and should create models as a means of decision support.
BUT the tricky part is mechanising the calculations. It involves just the level of math most tool creators seem to be uncomfortable with, and afaik there are not many reliable libraries offloading the effort and making the related tasks simple.
Actually, I am working on one, but it’s in a rather alpha state.
FWIW, with a few caveats that I’ve already mentioned, I have no problem with qualitative risk analysis for the quick-and-dirty first-level prioritization phase — triage, if you will. It helps avoid mind-numbing discussion of the gory details of lower-risk threats when that time (and energy) could be better spent on the risks that are more likely to kill the patient soonest and deserve the deeper dive.