🌟 Most awesome risk assessment style?

This post
Meet ThreatPad! - Techniques & Tooling - Threat Modeling Connect Forum (Post #13 - Post #20)
started an interesting discussion about which risk assessment style is best.

@steve_gibbons and @agota.daniel shared their valuable insights.

Some of the candidates:

  • Qualitative risk assessment: (likelihood, impact) ∈ {LOW, MEDIUM, HIGH}²
  • Any of its friends with a different discretization - see Likelihood / Impact
  • Threat Ranking
  • Quantitative methods like the Loss Exceedance Curve
  • Quantitative methods like Loss Expectancy from FAIR, with its Loss Event Frequency and Loss Magnitude approach (quick sketch of the first and last candidates below).
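
To make the first and the last candidate concrete, here is a minimal sketch. The 3x3 matrix, the function names, and all numbers are assumptions I made up for illustration (and FAIR itself works with ranges and distributions rather than single points):

```python
# Minimal sketch: qualitative 3x3 lookup vs. a FAIR-style point estimate.
# The matrix and all numbers are illustrative assumptions.

QUAL_MATRIX = {
    ("LOW", "LOW"): "LOW",      ("LOW", "MEDIUM"): "LOW",        ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW",   ("MEDIUM", "MEDIUM"): "MEDIUM",  ("MEDIUM", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "MEDIUM",  ("HIGH", "MEDIUM"): "HIGH",      ("HIGH", "HIGH"): "HIGH",
}

def qualitative_risk(likelihood: str, impact: str) -> str:
    """(likelihood, impact) ∈ {LOW, MEDIUM, HIGH}² -> overall rating."""
    return QUAL_MATRIX[(likelihood, impact)]

def loss_expectancy(loss_event_frequency: float, loss_magnitude: float) -> float:
    """Point-estimate version of Loss Expectancy: events/year * loss/event."""
    return loss_event_frequency * loss_magnitude

print(qualitative_risk("MEDIUM", "HIGH"))   # -> HIGH
print(loss_expectancy(0.5, 200_000))        # -> 100000.0 coin per year
```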

Also…

  • What’s the thing that beginners should learn?
  • What’s the thing that practitioners should actually use?

Why?

What are some of the :+1:s and :-1:s?

What is your take on this? Happy to have people join the discussion… :slight_smile:


Let’s discuss this by example!

The TMC DACH chapter will meet on Tuesday and threat model an online dating service like Tinder. For some participants, it will be their first contact with TM. :smiling_face_with_three_hearts:

We’re probably not going to build the next, more secure Tinder (or are we? :wink:), so we don’t have to worry too much about risk and feasibility. But let’s ignore that for a second and pretend we were onboarding Tinder devs.

Quantitative risk

I’m wrapping my head around quantitative risk assessment, based on FAIR Loss Expectancy with its Loss Event Frequency and Loss Magnitude approach:

(I’m just getting started. Eager to learn if there are pros around! :man_mage:)

We should estimate Loss Magnitude for some threat events (all estimated in :coin: - from our perspective; rough sketch after the list):

  • single public profile leaked
  • how many profiles are lost in a scraping attack? Multiply… and estimate an extra penalty.
  • single user’s “private” (haha, just kidding) messages leaked
  • single user experiencing awkward date with a liar - estimated in :coin: - from our perspective
  • single user’s hours of life and emotions wasted talking with a :robot: - estimated in :coin: from our perspective (btw. Is it our bot? Or from an attacker? Do they notice?)
  • single person harassed or insulted
  • single person experiencing the terrors of sextortion
  • single person raped
  • single user losing 1000€ in a love scam (sorry for you bro, we will calculate with 0€ because that’s not our money and estimate secondary loss of you getting really pissed, telling everyone, damaging our reputation and us losing business :face_with_peeking_eye:)
  • How popular is our service going to get? So what factor do we need to apply to “single user’s private messages leaked” and such when we lose ALL the data? Plus an additional penalty (read about Ashley Madison data breach - Wikipedia ! :scream: :scream: :scream: )

Consider Loss Event Frequency also. […]
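
Here’s the back-of-the-envelope version of that multiplication, as I currently understand it. Every number below is a placeholder I invented, not an actual estimate:

```python
# Loss Expectancy per year = Loss Event Frequency * Loss Magnitude,
# summed over threat events. All numbers are made-up placeholders.

USERS = 1_000_000  # assumed future popularity of our service

threat_events = {
    # name: (loss events per year, loss magnitude in coin per event)
    "single public profile leaked":          (2_000,  5),
    "scraping attack (many profiles)":       (0.5,    USERS * 5 + 500_000),  # multiply + extra penalty
    "single user's private messages leaked": (200,    2_000),
    "love scam (secondary loss only)":       (50,     10_000),  # our reputation damage, not the victim's 1000€
}

total = 0.0
for name, (lef, lm) in threat_events.items():
    ale = lef * lm  # annualized loss expectancy for this threat event
    total += ale
    print(f"{name:42s} LEF={lef:>8} LM={lm:>12,.0f} -> {ale:>14,.2f} coin/year")
print(f"{'TOTAL':42s} {'':>30} -> {total:>14,.2f} coin/year")
```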

Then: what are the different things we can do about it?
Developers, you guys rule at estimates! How much :coin: will each option cost? Multiply by 2, because you could be building fancy features instead, so we’re also paying opportunity costs.

Let’s do the math. What’s the best option? Does the extra security pay off? Now we know what the right thing to do is.
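
And a sketch of the “do the math” step, again with assumed numbers: compare each option’s risk reduction against its (doubled) development cost and pick the largest net benefit:

```python
# Compare mitigation options: net benefit = risk reduction - 2 * dev cost.
# All options and numbers are assumptions for illustration.

options = {
    # name: (loss expectancy before, loss expectancy after, dev cost estimate)
    "do nothing":              (1_500_000, 1_500_000,       0),
    "rate-limit scraping":     (1_500_000,   600_000,  80_000),
    "end-to-end encrypt DMs":  (1_500_000,   900_000, 250_000),
}

def net_benefit(le_before: float, le_after: float, dev_cost: float) -> float:
    opportunity_factor = 2  # devs could be building fancy features instead
    return (le_before - le_after) - dev_cost * opportunity_factor

for name, args in options.items():
    print(f"{name:24s} net benefit in year one: {net_benefit(*args):>12,.0f} coin")
print("Best option:", max(options, key=lambda name: net_benefit(*options[name])))
```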

:chequered_flag:

Some of my questions:

  • Do I get this right?
  • Will that lead to the right things done?
  • Are financial incentives and penalties through legislation and other social / economic processes really “designed” in a way that inspires the right behavior, assuming everyone is maximizing :money_bag:?
  • Does that even work? Or do engineers suck at money :money_bag: and time :hourglass_done: estimates?
  • Are we threat modeling, figuring out what is the right thing to do? Or are we spending 90% of our time estimating money?
  • It seems like Loss Expectancy grows linearly with the number of users, while development effort for a particular mitigation is roughly constant (see the sketch after this list). Does that mean huge services will naturally have awesome security, while small services are encouraged to suck at security?
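
Here’s the break-even sketch behind that last question; both numbers are assumptions I made up:

```python
# Loss Expectancy scales with users; a mitigation's development effort doesn't.
# Both numbers below are made-up assumptions.

preventable_loss_per_user = 0.50   # coin per user per year that the mitigation would prevent
mitigation_cost = 100_000          # coin, one-off development effort (independent of user count)

break_even_users = mitigation_cost / preventable_loss_per_user
print(f"The mitigation pays for itself within a year above ~{break_even_users:,.0f} users.")
# Below that point, a purely coin-maximizing small service is "encouraged to suck at security".
```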

Happy to hear some thoughts.

Aggregating risks in an understandable and meaningful way is challenging in general, and especially challenging with qualitative assessments. Heat maps and the like work to a certain extent, but they aren’t especially useful for multi-category reporting, comparison, ranking, and prioritization, and the work that would go into developing an abstract risk score useful for those purposes is probably better spent on actual quantified assessments using concrete units. On the other end of the complexity scale, aggregating a number of curves requires a bit more computation (not the end of the world) and presents other challenges for visualization and easy understanding, in addition to the extra burden on the assessors.
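
For the curve-aggregation point, here is a rough Monte Carlo sketch of what that extra computation looks like: simulate each risk’s loss events per year, sum the losses, and read exceedance probabilities off the sorted results. The three risks and all parameters are invented for illustration:

```python
# Monte Carlo sketch: aggregate several per-risk loss distributions into one
# loss exceedance curve. Risks, rates, and magnitudes are illustrative only.
import math
import random

random.seed(0)

def poisson(lam: float) -> int:
    """Knuth's method: sample the number of loss events in one year."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

risks = [
    # (mean loss events per year, mean loss per event in coin)
    (0.3, 800_000),   # large scraping incident
    (5.0, 2_000),     # individual "private" message leaks
    (1.0, 50_000),    # scam-driven reputation damage
]

def simulate_year() -> float:
    total = 0.0
    for rate, mean_loss in risks:
        for _ in range(poisson(rate)):
            total += random.expovariate(1.0 / mean_loss)  # skewed per-event loss
    return total

annual_losses = sorted(simulate_year() for _ in range(10_000))
for q in (0.50, 0.90, 0.99):
    threshold = annual_losses[int(q * len(annual_losses))]
    print(f"~{1 - q:.0%} chance of losing more than {threshold:,.0f} coin in a year")
```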


Yes, I agree.

Another problem with qualitative assessments is that (surprise:) they don’t handle quantities correctly :smiley:

Losing 1,000,000 database entries or losing 10,000 database entries? Both HIGH impact!

If those are two different issues, I’d have to treat them the same. :-1:
If that is the same issue before and after a certain mitigation, harm reduction is not rewarded (and so not done?). :-1:
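
A tiny sketch of that effect, with bucket thresholds I made up:

```python
# Qualitative buckets collapse very different quantities into the same rating.
# Bucket thresholds and the per-record cost are made-up assumptions.

def impact_bucket(records_lost: int) -> str:
    if records_lost >= 10_000:
        return "HIGH"
    if records_lost >= 100:
        return "MEDIUM"
    return "LOW"

cost_per_record = 5  # coin
for records in (1_000_000, 10_000):
    print(f"{records:>9,} records: {impact_bucket(records)} / {records * cost_per_record:>12,} coin")
# Same qualitative rating, but a 100x difference in quantified loss.
```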

What are your thoughts on my concerns with quantitative assessment (see the questions at the end of post 2)? @steve_gibbons

As much as it seems like I’ve been dismissing qualitative assessment, it still has its place:

  1. for beginners (a 3x3 HML grid is easy to grasp)
  2. for first-draft estimations

Quantitative assessment requires more data, training, knowledge, math, compute, interest, time, discipline, and rigor, but the benefits ARE there. Engineers can appreciate solid engineering, and they can also appreciate back-of-the-envelope designs that lead to prototypes that lead to solid engineering. So the trick is to lead them through the whole story one chapter at a time, starting at the beginning if necessary and appreciating progress along the way…