Back in September, there was a fascinating ProPublica article, Microsoft Chose Profit Over Security. It includes a link to Microsoft Security Servicing Criteria for Windows, which uses the term “security boundary” where I’d normally say “trust boundary.” I’m somewhat certain that I inherited the phrase without giving it a lot of thought. But when I train, some people find the phrase mystifying.
So is there a meaningful difference between the terms “trust” and “security” here?
I honestly have never heard the term security boundary before this post. I have only heard of trust boundaries. From reading your blog post I am not sure if the nuance of distinguishing the two is worth it.
The goal of threat modeling is to clearly communicate security design concerns. Every term/jargon we use is like a tax on clear communication, so every use needs to have a significant enough impact to justify the “tax”.
Keep it simple if we can and reduce the communication tax.
Sweeping generalisations and personal opinions ahead - read on if you dare!
I’m of the opinion that trust boundaries get used in the sense of “all these parts of the system are at the same level of trust, so I can draw a box around them and we can ignore threats between them.” Really? Trust boundaries seem to be used to make people’s favourite threat modelling tools produce fewer threats, or at least fewer noisy threats someone doesn’t think will be relevant. So I would say trust boundaries are commonly defined more by the things inside them than by the things outside them.
A security boundary implies to me a choke point you would defend with a security policy. I would say it’s defined more by what it tries to keep out, or allow through, perhaps.
Personally, because neither trust nor security boundaries are particularly well defined (at least in ways I think are useful), I don’t use either when talking to teams during a threat model. In that sense I agree with @Mike.Novack: I don’t see them adding value, and threat modelling seems to work just fine without them.
I tend to agree. In multiple threat modelling efforts I’ve done in the past, trust boundaries were not needed, and when other people tried to introduce them we soon found that the definitions collapsed.
[That’s why there’s no trust boundary concept in TRADES.]
I agree that a Trust Boundary should have an active definition similar to the concept of a Secure Enclave used in PCI and Microsoft’s Government Community Cloud (GCC) High offering. I personally don’t see a lot of value in the passive definition of “all devices within this region are at the same level of trustworthiness…”
I do think there is value in using basic trust zones like internet, 3rd party with a relationship, cloud environment, on-prem environment, etc. You should try to secure everything regardless of where it is via a zero-trust mindset, but sometimes you need to prioritize your efforts. I find that focusing on where things communicate across these trust boundaries has the highest ROI. Why? Because that is where different users, companies, and technology stacks are likely to interact, which in turn is the most likely place for things to go wrong.
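One way to picture that prioritization: enumerate the data flows in a model and flag the ones that cross zones for review first. This is a minimal sketch with made-up zone names and flows, not output from any real threat modelling tool.

```python
# Hypothetical sketch: flag data flows that cross trust zones for review.
# Zone names and flows below are illustrative assumptions.

# Each flow: (source component, source zone, destination component, destination zone)
flows = [
    ("browser", "internet", "api_gateway", "cloud"),
    ("api_gateway", "cloud", "orders_service", "cloud"),
    ("orders_service", "cloud", "payment_provider", "third_party"),
    ("batch_job", "on_prem", "warehouse_db", "on_prem"),
]

def cross_zone(flow):
    """Flows crossing zones are where different users, companies, and
    technology stacks interact, so review those first."""
    _, src_zone, _, dst_zone = flow
    return src_zone != dst_zone

high_priority = [f for f in flows if cross_zone(f)]
for src, src_zone, dst, dst_zone in high_priority:
    print(f"review first: {src} ({src_zone}) -> {dst} ({dst_zone})")
```

The intra-zone flows aren’t ignored, they just sort later in the queue, which matches the “prioritize, don’t skip” point above.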
I intuitively tend to think that a trust boundary describes how things should be according to our current mental model of the system, which still does not necessarily describe an ideal state of things.
The security boundary, on the other hand, is drawn where active controls are actually in place, irrespective of their effectiveness.
So in my view the trust zone is like the OSI model of networking: it serves mainly to make a point, to educate.
Once that point is made and we move on to discuss (and document) a concrete system, it rarely if ever happens that a trust boundary adds value to the model.
To identify missing controls, for example, I tend to find an analysis of the concrete controls in place between source and sink in a data flow more useful (as in static analysis of code).
Thus, I tend to agree with Avish that you can leave the concept of trust boundaries completely out of a model (given you have a more precise one in place; in my case, source-sink-mitigation).
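The source-sink-mitigation idea above can be sketched as: for each flow, compare the controls actually present on the path against the controls the sink requires, and report the gaps. The flow names, control names, and the “database sinks need validation and parameterized queries” policy are all illustrative assumptions, not part of any particular methodology.

```python
# Hypothetical source-sink-mitigation check: report required controls
# that are missing between a source and a sink. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Flow:
    source: str
    sink: str
    controls: set = field(default_factory=set)  # controls present on the path

# Assumed policy: any flow into a database sink needs both of these.
REQUIRED_FOR_DB = {"input_validation", "parameterized_queries"}

flows = [
    Flow("http_request", "users_db", {"input_validation", "parameterized_queries"}),
    Flow("http_request", "audit_db", {"input_validation"}),  # one control missing
]

def missing_controls(flow):
    """Return the required controls absent between this source and sink."""
    if flow.sink.endswith("_db"):
        return REQUIRED_FOR_DB - flow.controls
    return set()

for f in flows:
    gaps = missing_controls(f)
    if gaps:
        print(f"{f.source} -> {f.sink}: missing {sorted(gaps)}")
```

Note that no boundary appears anywhere in the model: the unit of analysis is the flow and its mitigations, which is the point being made.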
Most of the work I do with threat modeling is about educating my tech peers so they are empowered to do it without me there. That is why I find a simpler definition of trust boundary helpful.
I will take them doing threat modeling by themselves with less detail over me moderating with more detail.
Application security hinges on what people do when security is not directly involved, because the work required is too decentralized. There are typically not enough of us in an organization to be directly involved (and I am not saying there should be, as that would be extremely expensive).
Trust boundaries are natural places to scope-limit a review/evaluation/audit/assessment. The implication is that further review/evaluation/audit/assessment of the other side of that boundary is also important to fully understanding the risks to the system under study. They are also good places to summarize the output of those evaluations for consumption by other evaluations.
I think you are on to something, but I also think that reducing the number of threats can be a good thing to avoid overloading teams, processes, and documents with the same threats over and over again. So, as you say, I use trust boundaries to ignore threats inside them, but only because those threats are addressed elsewhere: maybe in a different threat model, or maybe by a 3rd party I cannot control or influence.
So I think the word “can” is doing a lot of work here. You certainly can leave anything out of a model — it’s a model. But including boundaries allows you to get to questions like “what can go wrong at this boundary” and “what are we going to do about that?”
Can you say what definition you were using? I find the ones about principals interacting can require work to understand, but I’ve never had them collapse on me.