Meet ThreatPad!

Meet ThreatPad - the new kid on the block:

It’s a threat model editor I’ve been using for my own stuff and now want to launch for the Community! :gift:

It has a focus on simplicity and quick start + sharing.

Some of the use cases I’ve used it for:

  1. jot down a few threats and mitigations real quickly
  2. share a threat modeling space with a friend
  3. start :tm: training with a quick real-world threat modeling scenario and capture the results.
  4. show a friend that cool threat modeling thing you’ve been so passionate about. Start by sharing a link to a sweet threat model you created for them.
  5. threat model with nothing but your phone.
  6. meta threat model together

Please let me know if you have any feedback!

Happy new year 2025! :fireworks: :fireworks: :fireworks: :partying_face: :partying_face: :tada: :tada: :tada:

P. S.: Sorry, no fancy AI this time. :robot: But fancy End-to-End Encryption. :closed_lock_with_key: :stuck_out_tongue_winking_eye:

4 Likes

Great job. Love the simplicity :slight_smile:

I would love it if you added a section for system description (basically an equivalent of "What are we working on?"). It makes the TM easy to read when shared.

1 Like

Hi @sandesh ,

Would a single textarea suffice, or what are you thinking of?

I have hidden settings where you can set the title. I also experimented with integrating draw.io for a diagram.

See Settings example - ThreatPad, or put an S between the first and second colon in the URL hash fragment - # … :S: …

Hendrik

Cool! I love the simplicity. What about :emoji: support? (Slack-style entry; :wink: also works here on TM Connect.)

1 Like

Hi @adamshostack, good idea, I’ll add it to my backlog. Will add Discourse named emojis with native browser rendering… :dancer: Unicode emojis are already working… :slight_smile:

:new_button: :new_button: Two little enhancements for ThreatPad!

(1) Mitigations can now have status.

This encourages / teaches a propose-then-choose flow for mitigation planning, where we first propose all sorts of ideas, then favor/reject or mark realized.

Simply click on the mitigation to edit.

(2)

Mitigation input placeholder now shows ideas inspired by NIST Cybersecurity Framework strategies: :locked: prevent / :rescue_worker_s_helmet: mitigate / :video_camera: detect / :fire_extinguisher: respond / :adhesive_bandage: recover
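For the curious, the propose-then-choose flow could be modeled roughly like this. This is just a sketch; the names and statuses are my assumptions, not ThreatPad’s actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"   # default: just an idea on the table
    FAVORED = "favored"     # chosen for implementation
    REJECTED = "rejected"   # considered and discarded
    REALIZED = "realized"   # actually built

@dataclass
class Mitigation:
    text: str
    status: Status = Status.PROPOSED

# Propose first, choose later: capture all ideas, then mark decisions.
ideas = [Mitigation("Rate-limit login attempts"), Mitigation("Add a WAF rule")]
ideas[0].status = Status.FAVORED
print([(m.text, m.status.value) for m in ideas])
```

The default of `PROPOSED` is what makes the flow work: nothing is a decision until someone explicitly makes it one.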

1 Like

I’m also working on a #hashtag system for categorization (Think #InformationDisclosure / #Database / #RogueUser )…

I could use some feedback. If you want to beta-test, leave a :heart: or let’s DM.

1 Like

This encourages / teaches a propose-then-choose flow for mitigation planning, where we first propose all sorts of ideas, then favor/reject or mark realized.

Great. This is very much aligned with my experience that we need to capture the development status. See, for example, Figures 5 & 6 (and related text) at Redirecting.

1 Like

Yes, it is an educational tool. And I believe this provides for the opportunity to discuss important points:

  1. propose-then-choose is one of the things that really distinguishes explicit threat modeling from doing things intuitively and picking the first thing that comes to mind. We can have rich discussions about different opportunities, their properties and what to choose.

  2. it is in the core nature of threat modeling that many of the things that protect our system are at first undone, and an undone mitigation protects from nothing. So we can discuss how threat modeling and realization work together and what role feasibility plays.

:slight_smile:

I was thinking about adding more selects

  • how strong is the protection :flexed_biceps: / :flexed_biceps: :flexed_biceps: / :flexed_biceps: :flexed_biceps: :flexed_biceps:
  • what is the cost / effort: :moneybag: / :moneybag: :moneybag: / :moneybag: :moneybag: :moneybag:

… to support discussion about different mitigations. But that probably over-simplifies and dilutes the tool…? :thinking: :red_question_mark:

I’m facing similar challenges (protection strength + cost/effort).
For now, I’ve added characteristics to security controls / mitigations (and other elements). These characteristics can be user-defined, so users can incorporate their own enumerations for the effectiveness/cost/effort of controls. I find this somewhat in line with NIST’s OSCAL, which allows you to define parameters for the descriptions of security controls, though I doubt that anyone uses this capability to actually support decision making about the controls (which is my intention). BTW, if someone does use OSCAL to assess/analyse security controls’ cost/effort/etc. (and not just for describing the controls), I would love to know!

1 Like

Yes, custom is always cool for the thing you really use, like OTM with its attributes map. God only knows if they measure in hours, T-shirt :t_shirt: sizes, :moneybag:s or whatever.

ThreatPad makes educational choices that help convey ideas. :star_struck:

:new_button: New Feature: :hash: :hash: :hash: Hashtags for ThreatPad

ThreatPad is good at teaching core threat modeling concepts really quickly, but as soon as you have 10+ threats, it gets really messy.

:hash: Hashtags to the rescue!

:hash: Hashtags can be introduced easily when you type threat descriptions. Any recurring concept may be a good candidate.

The in-app screenshots show some examples.

The bird’s-eye view has a filter function. It can filter on arbitrary substrings or :hash: hashtags.

Of course, you can also click tags inside descriptions and see what’s related.

The feature is designed to be discreet. If you don’t want to use it, you will only need to ignore 2 subtle hints and everything works like before.
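A hashtag matcher like this can be sketched in a few lines. The pattern below is my assumption for illustration, not necessarily what ThreatPad uses:

```python
import re

def extract_hashtags(text: str) -> list[str]:
    """Return all #hashtags found in a threat description."""
    return re.findall(r"#\w+", text)

threat = "Attacker reads backups #InformationDisclosure #Database"
print(extract_hashtags(threat))  # ['#InformationDisclosure', '#Database']
```

Clicking a tag to "see what’s related" is then just a filter over threats whose extracted tags contain the clicked one.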

Enjoy, type real quickly, explore! :slight_smile:

Here’s my example. Input will be shared with others.

If there’s any feedback, let me know.

I love the simplicity, but I’d suggest more granularity in the rating scales. Low, medium, and high are great for a first SWAG, but (for example) real-world likelihood/frequency can vary from:

  • as unlikely to occur as an extinction-level asteroid strike to
  • occurs thousands of times per minute/second

And real world “how bad” can vary from:

  • Trivial, walk it off to
  • Extinction level event

Cost is one of the easiest to quantify:

  • Pocket Change through
  • Decades of GDP

I’ve used logarithmic scales in the past to help with some pretty huge dynamic ranges, and also made creative use of adverbs (e.g. extremely unlikely), but I don’t have any better suggestions than those.

I do love what you’ve created
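To make the logarithmic-scale idea concrete, here is a minimal sketch (base-10 buckets; the example values are invented for illustration):

```python
import math

def log_bucket(value: float) -> int:
    """Map a positive value to a base-10 logarithmic bucket."""
    return math.floor(math.log10(value))

# Values spanning nine orders of magnitude collapse into a handful of buckets.
for v in (0.001, 5, 2_500_000):
    print(v, "-> bucket", log_bucket(v))
```

Each bucket covers one order of magnitude, so even "asteroid strike" to "thousands per second" fits in a short ordinal scale.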

1 Like

Thanks for sharing your thoughts!

low / medium / high is an educational choice. It conveys the idea. It intentionally gets people upset, because it’s so fuzzy, so we can discuss how we would even tell. :winking_face_with_tongue:

When I designed it, I had a different approach with a full width red - yellow - green gradient bar. You could place dots wherever you liked, simulating the 0% - 100% from OTM.

I also had a collaborative thing where multiple opinions were possible.

People didn’t quite like it. :smiley: Would you?

I thought about having space settings so people can choose between different setups. But there’s a tradeoff between flexibility and simplicity.

What would you implement?

See also

for discussion/visuals about 1, 2, 3, 5, 10, 100 discretization steps. :smiley:

The biggest challenge with the HML scale isn’t the HH or LL combinations but the other two corners, with extremes within the highs set against extremes within the lows. In those cases, risk can be very overstated or very understated. This is easiest to see if one assigns a quantified range to each HML bucket and then does the math on each of the 36 sub-corners of the nine-cell grid, with the problem rearing its head when zero is multiplied by infinity.

It also rears up in larger grids, though (in my experience) it isn’t encountered as often when there are more than 9 total buckets. It does still occur.
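The over/understatement problem is easy to reproduce with toy numbers. This sketch assumes hypothetical quantified ranges behind each H/M/L bucket; the frequencies and dollar values are invented for illustration:

```python
# Assumed ranges behind each bucket: events/year and $ per event.
likelihood = {"L": (1e-6, 1e-2), "M": (1e-2, 1.0), "H": (1.0, 1e3)}
impact = {"L": (1e2, 1e4), "M": (1e4, 1e6), "H": (1e6, 1e9)}

cells = {}
for lk, (l_lo, l_hi) in likelihood.items():
    for ik, (i_lo, i_hi) in impact.items():
        # Extreme sub-corners of each cell: best- and worst-case annualized loss.
        cells[lk + ik] = (l_lo * i_lo, l_hi * i_hi)

# A single "L likelihood x H impact" cell spans seven orders of magnitude,
# so one HML label can hide wildly over- or understated risk.
print(cells["LH"])
```

With these toy numbers, the LH cell runs from about $1/year to $10,000,000/year, which is exactly the corner problem described above.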

For collaboration, I prefer a ranked voting scheme to prioritize the top N issues, which are then rated, with subsequent votes to break deadlocks/contention on risk ratings (if they occur). This allows different thinking styles, areas of concern, and degrees of expertise to participate equally. If the contention is huge, using the minority-opinion approach in documentation can preserve group harmony and still keep momentum.

1 Like

Hi Hendrik,

I would recommend checking out the book How to Measure Anything in Cybersecurity Risk, specifically chapter 3. It introduces a simulation-based approach to generate something called a Loss Exceedance Curve (LEC).

It gives you the probability that your losses will exceed a certain amount, say your insurance value.

It doesn’t really matter how finely you tune the discrete scale; it will stay an ordinal scale with all the methodological issues: one assumes certain properties of the values, like additivity or distance, just because they are represented by numbers. However, these assumptions are not valid.

With the simulations behind an LEC you get exactly these properties: the values can be added up, the LECs capture a sense of distance, and they are able to express the extreme values.

@avish I find them very well suited for decision support.

1 Like

Sounds cool, @agota.daniel . I will check that out. Danke!

At first glance, that sounds a bit like Loss Expectancy from FAIR with their Loss Event Frequency and Loss Magnitude approach.

Connecting your thoughts and those of @steve_gibbons, this raises the question of what is best to teach newcomers, and why:

Simple qualitative risk? Quantitative risk? Threat ranking? :thinking:

Threat ranking is supported a little bit: At least, you can have favorite and discarded threats. :star: :wastebasket:

Probably a question of belief… :rofl:

More thoughts?

Loss Exceedance Curve = Probability of Exceedance from insurance math; it is a curve.

FAIR’s Loss Event Frequency and Loss Magnitude are, as far as I know, point estimates. LEF simply gives you a probability of a loss event happening, like 0.7. The same for Loss Magnitude: $70.

An LEC tells you the relationship between an arbitrary loss value and the associated probability of exceeding that value:

70% for exceeding 100k
20% for exceeding 300k
2% for exceeding 10 million

(just as a curve, not as a list)
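A Loss Exceedance Curve is straightforward to approximate with a Monte Carlo simulation. This sketch uses invented parameters (the event probability and the lognormal loss distribution are assumptions for illustration, not from the book):

```python
import random

def simulate_annual_losses(n_trials=100_000, p_event=0.3, seed=42):
    """Simulate one loss per trial: an event occurs with probability
    p_event; if it does, draw the loss from a lognormal distribution."""
    rng = random.Random(seed)
    return [rng.lognormvariate(11, 1.5) if rng.random() < p_event else 0.0
            for _ in range(n_trials)]

def prob_exceeding(losses, threshold):
    """Fraction of simulated years whose loss exceeds the threshold."""
    return sum(loss > threshold for loss in losses) / len(losses)

losses = simulate_annual_losses()
for t in (100_000, 300_000, 10_000_000):
    print(f"P(loss > {t:,}) = {prob_exceeding(losses, t):.1%}")
```

Plotting `prob_exceeding` against a whole range of thresholds gives the curve itself rather than a list of points.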

There is no such thing as “simple qualitative risk”… simply fallacies that are inherent to all qualitative models, which are not obvious at first and make you believe that you are doing something sound 😜

Understanding (and teaching) quantitative risk is not hard: the basics can be communicated in a session of about one hour.

Getting people unstuck on some methodological misunderstandings takes some getting used to, for example the idea that there is an objectively measurable probability of an event, and that, given there is NOT one, we still can and should create models as a means of decision support.

BUT the tricky part is mechanising the calculations. It involves just the level of math most tool creators seem to be uncomfortable with, and as far as I know there are not many reliable libraries offloading the effort and making the related tasks simple.

Actually, I am working on one, but it’s in a rather alpha state.

Kr.: Daniel

1 Like

FWIW, with a few caveats that I’ve already mentioned, I have no problem with qualitative risk analysis for the quick-and-dirty first-level prioritization phase: triage, if you will. It helps avoid mind-numbing discussion of the gory details of lower-risk threats when that time (and energy) could be better spent on the risks that are more likely to kill the patient soonest and deserve the deeper dive.

1 Like