What's your biggest challenge with threat modeling right now?

Hi TMC Community!

I want to open the floor for a discussion on the challenges we face in threat modeling and related areas. Whether you’re a seasoned practitioner or just getting started, chances are you’re not alone. You might find someone who’s been through the same challenge or is eager to figure it out with you!

What challenges are you currently facing?

  • Are there specific technical hurdles you’re encountering?
  • Is there a process that seems to be causing bottlenecks?
  • Do you struggle with stakeholder buy-in or communication?
  • Any tools you wish worked better for you?

Sharing your challenges helps us all figure things out together. Plus, we want to use your input to line up awesome speakers and plan content that actually resonates with you.

Drop your thoughts below, and let’s tackle these hurdles as a community. Looking forward to hearing from you!

Right now, it is extending threat modeling to the deployment and use of AI. In many of the conversations I am having, I find there is some confusion over the components that make up an AI solution – from platform infrastructure and applications to models and data – and how to think about the threats to each component. When it then moves to more agentic and RAG-based implementations, it gets even more difficult. I am looking for any good ideas about how to approach this threat modeling work in a way that is easier to consume. I am also looking for examples where people have used MITRE ATLAS or similar frameworks to inform their threat vocabulary.
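For what it's worth, the most consumable shape I've tried so far is a simple component-to-threat mapping, using ATLAS technique names as the shared vocabulary. A rough sketch below (the component split and the mappings are my own, and any IDs should be double-checked against atlas.mitre.org):

```python
# Rough sketch: one entry per component layer, with candidate threats attached,
# using MITRE ATLAS technique names as a shared vocabulary.
# The split and mappings are mine; verify IDs against atlas.mitre.org.

ai_solution = {
    "platform infrastructure": [
        ("resource hijacking / denial of ML service", None),
    ],
    "application": [
        ("LLM prompt injection", "AML.T0051"),
        ("LLM jailbreak", "AML.T0054"),
    ],
    "model": [
        ("model theft / extraction via inference API", None),
    ],
    "data": [
        ("poison training data", "AML.T0020"),
        ("RAG knowledge-base poisoning", None),
    ],
}

for component, threats in ai_solution.items():
    print(component)
    for name, atlas_id in threats:
        print(f"  - {name} [{atlas_id or 'look up in ATLAS'}]")
```

Even this trivial table has helped conversations, because each team can find their layer and see that "AI threats" are not one undifferentiated blob.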

2 Likes

@billreid I haven’t had to threat model a full deployment of an AI system myself, but I can share how I would approach any large, complicated system: always try to break it down into smaller parts, ideally parts where a single team owns each one, so they can speak authoritatively about their part. I also try to remain clear on which properties of the system I want to model; usually my focus is access control. AI comes with its own set of “business logic threats”, e.g. hallucination, jailbreaking, etc., but there should be good resources online that cover those.
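To make that concrete, a toy sketch of the breakdown (all the names here are hypothetical):

```python
# Sketch of the breakdown described above: split the system into parts that
# each have a single owning team, then model one property (here: access
# control) per part. All names are made up for illustration.

parts = [
    {"name": "ingestion-api", "owner": "platform-team", "entry_points": ["public HTTPS"]},
    {"name": "vector-store",  "owner": "data-team",     "entry_points": ["internal gRPC"]},
    {"name": "llm-gateway",   "owner": "ml-team",       "entry_points": ["internal HTTPS"]},
]

# With an access-control focus, the first question per part is simply:
# who can reach each entry point, and is that the smallest set that needs to?
for part in parts:
    print(f"{part['name']} (ask {part['owner']}):")
    for ep in part["entry_points"]:
        print(f"  - who can reach '{ep}', and should they?")
```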

I found this Microsoft course that might be useful if there are some basics you need to cover; otherwise, this TL;DR sec post covers all the AI talks from DEF CON and Black Hat, which might be a guide to some more cutting-edge AI security issues. This NCC Group post, Analyzing AI Application Threat Models, and this post on Threat Modelling Enterprise AI Search may also be useful.

2 Likes

Hey folks! I do a lot of threat modeling of AI systems in my current job, mostly on Microsoft Azure. Being a security engineer in this stack has its challenges, given how folks are building AI applications; however, Microsoft is fairly good at putting out MCSB security baselines for Azure services (such as Azure OpenAI and Azure Machine Learning). RAG patterns also come with their own sets of challenges and solutions, but once you get a better understanding of how they are built, I can assure you they really aren’t too bad to threat model. The OWASP Top 10 for LLM Applications is also very handy when threat modeling the AI capabilities, but don’t neglect the environment as well! I would very much encourage you to go deploy your own endpoint and create a RAG solution (fairly easy to do in Azure); IMO the best way to learn how to threat model is to learn how to be a developer as well.
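If it helps, here is roughly how small a toy RAG pipeline can be. This is a sketch, not production code; the deployment names and API version are placeholders for whatever you set up in your own Azure resource:

```python
# Minimal RAG sketch against an Azure OpenAI endpoint, to show how few moving
# parts there are. Assumes the `openai` Python package (v1.x) and two existing
# deployments; deployment names and api_version below are placeholders.
import os
import math
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # check the currently supported version
)

docs = [
    "Expense reports must be filed within 30 days.",
    "Production access requires an approved change ticket.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small",  # your embedding deployment name
        input=text,
    ).data[0].embedding

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

question = "When do expense reports have to be filed?"
q_vec = embed(question)
best_doc = max(docs, key=lambda d: cosine(q_vec, embed(d)))  # retrieval step

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # your chat deployment name
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Each moving part (the document store, the retrieval step, the prompt assembly) is somewhere to hang threats, e.g. poisoned documents turning into indirect prompt injection.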

4 Likes

Hi Everyone,

Here are some observations from my first year leading threat modeling sessions:

  • I think a lot of developers prefer determinism, and threat modeling does not feel deterministic to them.
  • Developers sometimes hold on to a very exact view of the architectural components, even those not related to trust boundaries, and that can make the initial analysis difficult.
  • I have not found AI that useful yet for creating the initial threat list. I’m not sure the models have been trained on enough threat data yet to produce anything meaningful.
1 Like

These are some really interesting observations, and I can relate to them a lot. In my experience:

  • With regard to determinism, I’m not sure threat modelling will ever be a deterministic process, but I’ve had some success in leveraging consistency to drive the creation of the model of the system (see this talk, and the sketch after this list). The other benefit is that it gives the Security team more confidence that the system has been captured correctly, and hence that relevant threats are less likely to be missed.
  • With regard to trust boundaries, I’ve been threat modelling for a while now and I don’t leverage the concept of trust boundaries at all, as I’ve found it more a hindrance than a help. It might be interesting for you to consider what your process would be like if you didn’t use trust boundaries.
  • I’m with you on AI; it hasn’t proved that practical … yet (I guess I’m still hopeful).
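On the consistency point, here is a toy illustration of what I mean (the field names are my own invention, not from any particular tool):

```python
# Sketch of a consistent system model: capture every system in the same
# structured shape, so completeness can be checked and candidate threats
# derived the same way each time. Field names are invented for illustration.

system_model = {
    "components": ["browser", "web-app", "database"],
    "data_flows": [
        {"from": "browser", "to": "web-app", "data": "credentials", "protocol": "HTTPS"},
        {"from": "web-app", "to": "database", "data": "user records", "protocol": "TLS/SQL"},
    ],
}

# Because every model has the same shape, a simple completeness check works
# on all of them, e.g. "does every component appear in at least one flow?"
in_flows = {f["from"] for f in system_model["data_flows"]} | {f["to"] for f in system_model["data_flows"]}
for c in system_model["components"]:
    if c not in in_flows:
        print(f"warning: '{c}' has no data flows; has the system been fully captured?")
```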

Agree with @Dave. Threat modeling is not deterministic by nature, but the same is true of architecture in general.

It sounds like your developers/tech area need a bit more architecture-level thinking. Threat modeling is about identifying design flaws (high-level), while developers are looking for deterministic guidance, like what they should do in their code. Those are implementation flaws, and things like SAST, DAST, etc. cover them from a security perspective.

I also prefer not to use security language, or even to say “threat modeling”. Why? Architecture design and secure design should not be different things, in my opinion. Using different language for security design concerns than for everything else they are considering (business functionality, performance, user experience, etc.) creates an us-vs-them dynamic. This is counterproductive, as I want technology to fully take ownership of the security of their applications.

It is good that we have developed the threat modeling practices we have, but we should meet technology where they are. Let’s learn their language and adapt our messages to work with their existing mental frameworks, instead of expecting them to learn ours.

1 Like

On leveraging AI to scale threat modeling: I like to describe it as a car that goes 0-60 in 3 seconds (type a prompt, provide a diagram as input, and voila, you have “something”), 60-80 in 10 minutes (needs a lot of prompt engineering and access to internal information) and nearly impossible to go beyond 80 (you will have to choose between creativity and hallucination).

What may work, though, is to solve a smaller problem. We’ve had reasonable success in generating security requirements from design documents. While it’s hard to mimic the quality of a seasoned security architect, it helps improve coverage and provides a baseline level of quality.
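To make that concrete, here is the shape of the approach (a minimal sketch; the model name and prompt wording are placeholders, not what we actually run):

```python
# Sketch of the "solve a smaller problem" idea: ask the model to extract
# security requirements from a design document, with instructions that bias
# it toward coverage rather than creativity. Placeholder model name and prompt.
import os
from openai import OpenAI  # any chat-completion-style API would work here

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

PROMPT = """You are assisting a security architect.
From the design document below, list concrete security requirements.
For each one, give: the requirement, the component it applies to, and the
sentence in the document that motivated it. If the document does not
support a requirement, do not invent one.

Design document:
{design_doc}
"""

def security_requirements(design_doc: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(design_doc=design_doc)}],
    )
    return resp.choices[0].message.content

print(security_requirements("All user uploads are stored in a shared bucket."))
```

Asking for the motivating sentence per requirement is one way to push the model toward the coverage end of the creativity/hallucination trade-off.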

To generalize, LLMs are good at solving specific problems (e.g., extracting context from unstructured data). LLMs can help solve a subset of threat modeling problems if we can leverage that capability.

-Sandesh

1 Like

@sandesh lol, I really love your car analogy for AI! I think it sums things up really nicely :slight_smile:

It is not really a new problem, but I still see it as the most relevant: a lot of threat modeling jargon, and this is generally true of the wider security industry, creates an us (cybersecurity) vs. them (IT and business) dynamic. To clarify, we do need our security jargon so that we as practitioners can discuss things quickly without constantly defining terms, but I would encourage using simpler, jargon-free language when talking to non-cybersecurity folks. If you can use their jargon, even better: that creates a dynamic of “we” instead of “us vs. them”. Jargon creates an adversarial culture rather than a partnership. You can give the most articulate, logical, data-driven solution to a problem, but if you are perceived as an enemy or an “other”, no one will listen. Once trust is built up, then start educating them on cybersecurity jargon, so they can become cybersecurity practitioners in their own way.

I recently began working on threat modeling for my organization and have encountered a few key challenges so far:

  1. Collaborating with third-party developers to integrate threat modeling early in the application development process, especially when it is not part of their existing practices.
  2. Ensuring that the findings from the threat modeling process are addressed during development. While security teams can identify potential threats, successful mitigation often depends on the development team’s effort and understanding to implement the necessary changes.
1 Like