Hackathon design

I’ve written up some thoughts on the hackathon and judging criteria: “A Different Hackathon Design?” on the Shostack + Friends Blog.

Would love to hear what people think about either the thing I see as going wrong, or what to do about it. :slight_smile:

Hi @adamshostack + others,

I see your point and I like where this is heading. It’s a bit like asking “What makes a good threat model [in a learning scenario]?”

First thoughts on your criteria:

  • Originality: :+1:
  • Comprehensibility: :+1:
  • Time to review: This is a bit like saying it should be short and sweet. It rewards discarding all the low-risk threats, even though documenting that they were seen and discarded has value.
  • Unique threats found (not in any other analysis): Rewards fancy/crazy low-likelihood threats that are not really an issue.
  • Fraction of content that’s “actionable”: Not sure if I understand. Can you explain? Also: Are “no action” findings actionable? :wink:

Criteria… How about

  • Relevance of threats
  • Irrelevant threats discarded or judged “no action” early on
  • Adequacy of mitigations (do the mitigations really tame the associated threats?)
  • Feasibility of mitigations (the solution should recognise that effort is finite, so not every kind of threat can be tamed, and make sensible decisions accordingly; this would require some input on what counts as feasible)

What do you think?

@adamshostack I’m thrilled to see that you’ve brought up this topic.

@hewerlin Regarding your comment about “Unique threats found (not in any other analysis)” as a criterion: I didn’t interpret this as a proposal to reward obscure / low-likelihood threats that are not really an issue. I see this as a useful criterion for confirming what threat modeling adds to the secure SDLC that other types of analyses would miss, which confirms the purpose of hosting threat modeling hackathons. How might we be able to rephrase this to make sure that participants read this criterion the way it’s intended to be read?


Probably “unique relevant threats found” would do. But then the criteria overlap if there’s already a separate relevance criterion.

Also: how do we judge threat relevance? Follow the team’s risk assessment? Redo it? Follow a judge’s “this threat sounds relevant” gut feeling? :thinking: And which threats, on the other hand, are irrelevant, unlikely, or fancy/crazy? Probably not so obvious. :smile:

Merry Christmas, everyone! :innocent:

I have begun to use a mental model of threat modelling as:

  1. Deciding on a security model to use to evaluate the target system. This could be STRIDE, LINDDUN, ATT&CK, or whatever. It is basically the set of security properties of the system that you want to check: are they present, or are there gaps, i.e. threats?
  2. Creating a model of the system (“What are we building?”) that includes sufficient detail to evaluate the security properties in the security model (from 1).

I think this mental model could be helpful in deciding what threats should and shouldn’t be in a threat model. It becomes quite straightforward: you want to include all threats that relate to your security model, regardless (within reason) of whether they are “unique” or “actionable” or meet whatever other criteria, because by definition they are relevant (they concern security properties that you have defined as relevant).

Of course, the challenge then becomes choosing the right security model. This choice depends on a variety of factors, but probably most of all on the business/industry risks that matter most. Just choosing every security property you can think of risks including threats that aren’t relevant, and each security property is also a multiplier on the number of threats to consider, which affects the size of the threat model and the ability to consume or verify it.
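To make the “multiplier” point concrete, here’s a minimal sketch (plain Python, hypothetical labels, not anyone’s actual tooling) that treats candidate threats as the cross product of the chosen security properties and the modelled system elements:

```python
from itertools import product

# Hypothetical illustration: the "security model" is the set of properties
# to check (here, STRIDE categories), and the "system model" is the set of
# elements described in enough detail to evaluate those properties.
security_model = ["Spoofing", "Tampering", "Repudiation",
                  "Information disclosure", "Denial of service",
                  "Elevation of privilege"]

system_model = ["browser -> web app", "web app", "web app -> database",
                "database", "admin -> web app"]

# Each (property, element) pair is a candidate threat to consider,
# so every extra property multiplies the work: 6 x 5 = 30 candidates here.
candidate_threats = list(product(security_model, system_model))
print(len(candidate_threats))  # 30

# Narrowing the security model (e.g. handing DoS to the infra team)
# shrinks the review multiplicatively: 5 x 5 = 25 candidates.
narrowed = [t for t in candidate_threats if t[0] != "Denial of service"]
print(len(narrowed))  # 25
```

Dropping or adding a single property changes the whole review multiplicatively, which is why the choice of security model matters so much.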

Bringing this back to the hackathon (but really to threat modelling in general), I can see value in teams providing clarity on why they chose certain security properties, what makes them relevant, and what trade-offs they are prepared to make and why (for instance, I often ignore DoS threats when talking to software teams, as it’s usually handled by the infra team).

I’ll caveat this by saying that just because I think all threats relevant to the chosen security model should be included doesn’t mean I think an exhaustive list is the best way to present them. There are likely numerous better presentation methods that would be ideal to explore in a hackathon (I, for instance, lean towards aggregating threats when presenting findings).

More thoughts on “Unique threats found”:

  • It would be a good property if a threat-model candidate could be judged on its own. I don’t know how many teams will apply, but given different teams and output formats, uniqueness may be computationally hard to evaluate. :wink:
  • I would argue that a threat model should find the highly relevant threats, even if they are “boring”. If we want to judge misses and blind spots, there could be some reference list of threats that should be found (see the sketch below). That runs contrary to unique + fancy.
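If judges did work from such a reference list, a minimal sketch of the scoring (hypothetical function and threat names; it assumes submissions are already normalised to comparable labels, which is the hard part in practice) might look like:

```python
# Hypothetical sketch: score a team's threat model against a reference
# list of threats the judges expect to be found.
def coverage_score(submitted: set[str], reference: set[str]) -> dict:
    found = submitted & reference    # expected threats the team found
    missed = reference - submitted   # blind spots
    extra = submitted - reference    # candidates for "unique" threats,
                                     # still needing a relevance judgement
    return {
        "coverage": len(found) / len(reference),
        "missed": sorted(missed),
        "unreferenced": sorted(extra),
    }

reference = {"SQL injection via search form", "Stolen session cookie",
             "Unsigned firmware update"}
submitted = {"SQL injection via search form", "Laser fault injection on CPU"}

print(coverage_score(submitted, reference))
# coverage ~0.33, misses the cookie and firmware threats, and flags the
# laser attack for a judge to call either relevant or "fancy/crazy"
```

The coverage number rewards finding the boring-but-important threats, while the “unreferenced” bucket is where any genuinely unique findings would still need a human relevance call.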