What’s your definition of ‘trust boundary’?

Hey friend! This week in the community, we want to talk about trust boundaries and how they’re defined.

According to SOTM 2024-2025, 72% of orgs use trust boundaries in their modeling. The most common definition: “a grouping of things that share the same level of trust / all trust each other.”

64% of people use that definition of a trust boundary, so that argument is finally settled, right? Right? (/throws grenade; /runs for cover)


This post is part of our new weekly ‘Peer Perspectives’ series! Each week, we’ll explore a new threat modeling topic and open it up for community discussion. Over the coming weeks, each post will dig into one key finding from the newly released State of Threat Modeling Report 2024-2025.

I like the flood-fill-in-Paintbrush analogy.

Say I draw a not-so-perfect circle. When I flood fill the “outside” red, the inside turns red too. Only when I close the circle (=> mitigations) do I get the desired inside/outside separation when flood filling the outside.

Things inside the same trust zone, like neighbouring pixels of the same colour under flood fill, have the “when I am here, I can easily get there” property.
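The analogy is literal enough to run. Here’s a minimal sketch (the grid, the gap position, and all names are invented for illustration): a box of wall cells with one breached wall does not keep a fill started outside from reaching the centre, while a sealed box does.

```python
# Hypothetical sketch of the flood-fill analogy: an enclosure with a gap
# (an unmitigated path) does not separate "inside" from "outside".

def flood_fill(grid, start, new):
    """Iterative 4-neighbour flood fill over a list-of-lists grid."""
    rows, cols = len(grid), len(grid[0])
    old = grid[start[0]][start[1]]
    if old == new:
        return grid
    stack = [start]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old:
            grid[r][c] = new
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

def make_box(gap=False):
    """7x7 grid of '.' with a 5x5 box of '#' walls around an interior."""
    box = [list("#####"),
           list("#...#"),
           list("#...#"),
           list("#...#"),
           list("#####")]
    big = [list("." * 7) for _ in range(7)]
    for r in range(5):
        for c in range(5):
            big[r + 1][c + 1] = box[r][c]
    if gap:
        big[1][3] = "."   # breach the top wall: the "open door"
    return big

leaky = flood_fill(make_box(gap=True), (0, 0), "R")
sealed = flood_fill(make_box(gap=False), (0, 0), "R")
print(leaky[3][3])   # centre cell: "R" -- outside leaked in
print(sealed[3][3])  # centre cell: "." -- boundary held
```

The fill only stops at the wall when the wall is complete; one missing cell and “inside” is just more “outside”.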

With trust zones there must be a notion of inside and outside. And a trust zone only makes sense if there’s something in between that really enforces that border. “If I am here, I can NOT easily get there”.

With my home, it’s only because of the door that I can speak about inside (home) and outside (big wide world). When the door is open or unlocked, yes, there’s still inside and outside on a conceptual level, but not in terms of actual protection.

To use your home and door analogy: it’s easy to establish a trust boundary when you know the ways an adversary can get in (the door). However, when an adversary can just “appear” inside your house without using a door, the trust boundary is redefined and becomes moot. When teams threat model without experience, they assume adversaries will get in through conventional routes (doors, windows, SSH remote access); but throw a reverse shell into the mix and the adversary now “appears” inside the house, and the assumed trust boundary becomes the weakness.


I tend to view this from a functional standpoint. A “trust boundary” is where you have to re-establish your level of trust (usually in the form of identity) in order to cross it. For example, on my iPad/computer/other personal device, once I log in, the trust boundary encloses most of the applications on the device. Some functions/applications have a finer trust boundary (e.g., a brokerage app); in order to cross that boundary, I have to authenticate again, perhaps with additional devices for 2FA.
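That functional view can be sketched as a rule: crossing a boundary means presenting every identity proof that boundary requires. This is a toy model, not any real authentication API; `Session`, `can_cross`, and the proof names are all invented for illustration.

```python
# Hypothetical sketch of "a trust boundary is where trust must be
# re-established". All class/function names here are invented.

class Session:
    def __init__(self):
        self.proofs = set()          # identity proofs presented so far

    def authenticate(self, proof):
        self.proofs.add(proof)

def can_cross(session, boundary_requires):
    """A boundary is crossable only with every proof it demands."""
    return boundary_requires <= session.proofs

s = Session()
s.authenticate("device_login")

# Ordinary apps sit inside the device-login boundary.
print(can_cross(s, {"device_login"}))                     # True

# The brokerage app draws a finer boundary: it re-checks identity.
print(can_cross(s, {"device_login", "password", "2fa"}))  # False
s.authenticate("password")
s.authenticate("2fa")
print(can_cross(s, {"device_login", "password", "2fa"}))  # True
```

The nesting falls out naturally: the finer boundary’s requirement set is a superset of the coarser one’s, so crossing it implies having already crossed the outer boundary.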

That sounds clever.

Isn’t this an incomplete threat model / a blind spot?

I still need to secure the house and find out how attackers keep beaming in.

What is your alternative suggestion?

Hi folks,

This follows from the definition of a trust region itself. My definition of a trust region is:

a trust region is a region where there is an expectation of a security policy being fulfilled

It’s a terse, abstract definition, but it fits most cases I encounter: it basically says that things inside the region are trusted for some purpose (as specified by the policy).

As an aside, it doesn’t make sense to treat a trust boundary as anything other than a border (a dividing line) between regions. It also doesn’t make sense to introduce new terms like “privilege” into the definition of a trust boundary/region, because then you have to define privilege and demonstrate how it links back to trust.

The definition above also means that trust regions can overlap, because each region is a trust region for a particular policy.

1/ For example, if a remote call centre worker can see your social security number, they are trusted not to disclose it (the policy), and that particular trust region extends from the backend to include all call centre workers. This is obviously an example of a poorly defined trust region, but the model is descriptive.

2/ Most of the time we draw trust regions at the abstraction level of the model, e.g. network zones, or rooms in a house.

3/ Inside the DMZ, different entities may trust that all requests have already been authenticated at the edge and transited via an inbound API gateway. This may or may not be a sensible thing to trust.
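The “regions can overlap because each exists relative to a policy” point can be made concrete with a few lines. This is a descriptive sketch only; the policy names and entities are invented, loosely following examples 1/ and 3/ above.

```python
# Hypothetical sketch: a trust region pairs a policy with the entities
# expected to uphold it, so regions for different policies can overlap.
# All policy names and entities are invented for illustration.

trust_regions = {
    "no-ssn-disclosure": {"backend", "call_center_worker"},
    "requests-preauthenticated-at-edge": {"backend", "dmz_service"},
}

def regions_of(entity):
    """Policies an entity is trusted to uphold, i.e. regions it sits in."""
    return {p for p, members in trust_regions.items() if entity in members}

print(sorted(regions_of("backend")))
# ['no-ssn-disclosure', 'requests-preauthenticated-at-edge']
# The backend sits in both regions at once: overlap is natural because
# each region is defined relative to its own policy, not to geometry.
```

Note that membership says nothing about enforcement; an entity inside a region is merely *expected* to honour the policy, which matches the point below about entities typically being able to violate it.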

My definition also means that entities inside a trust region are trusted not to violate the policy, but could typically do so, possibly undetected.

For example, trusted apps running inside ARM TrustZone (a distinct, privileged trusted execution environment) are trusted not to mess with physical RAM in areas they have been told not to, even though they could.

Looking at trust regions at layers 3/4 of the network stack, or at executing entities in a compute system (e.g. operating-system processes vs. userland processes), trust regions will often match privilege levels, because privilege is often the policy we are most interested in.