The first thing my approach requires is that in-scope and out-of-scope "parts of the system" (or "components") are identified. Then for each in-scope component there needs to be at least one row in a dedicated authn/z table that captures:
- the name of the (in-scope) component
- an identity that makes requests to the component (this trips people up, as a lot of the time the natural focus is on requests coming from the component); there can be multiple rows, one for each identity that makes requests to the component
- how the component authenticates the identity
- how the component authorises the identity
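The rows described above could be sketched as plain data, for example in Python. The component and identity names below are hypothetical, purely for illustration:

```python
# Sketch of the authn/z table described above: one row per
# (component, calling identity) pair. All names are invented.
from dataclasses import dataclass

@dataclass
class AuthRow:
    component: str  # in-scope component receiving requests
    caller: str     # identity making requests TO the component
    authn: str      # how the component authenticates the caller
    authz: str      # how the component authorises the caller

table = [
    AuthRow("orders-api", "web-frontend",
            authn="mTLS client certificate",
            authz="service allow-list"),
    AuthRow("orders-api", "batch-job",
            authn="OAuth2 client credentials",
            authz="scope check (orders:write)"),
]

# The same component appears once per calling identity:
callers = [r.caller for r in table if r.component == "orders-api"]
```

A gap in any `authn` or `authz` cell is exactly the kind of "lack of authn/z" finding the thorough capture tends to surface.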
Asking how authentication and authorisation work is surprisingly challenging for many teams, in my experience. I think the problem is that there isn't always a standard way to express which authn/z is being used, so it can be difficult to articulate. To that end I wrote up detailed documentation, including a flow chart, on how to express authn/z information in various common scenarios.
Note how none of the information requested is actually asking the author to think about security, it’s just asking “how does this work”. I find capturing this information thoroughly tends to lead to discovering threats related to lack of authn/z (if there are any). More specific authn/z threats would be identified by someone from the security team on a use case basis (or alternatively threat model templates based on the use case can come pre-populated with relevant authn/z threats, and authors must provide their compensating controls).
Thank you, @Dave for sharing your approach. I’ll also check out your threatware.
We have multiple facets in our approach:
We use unified threat actor names + preconditions as the subject of threat description sentences ("Server intruder reads data from database" / "Attacker with login credentials reads data via normal use of user interface" / …). I found a similar thing in the threat grammar of the AWS Threat Composer tool.
For each such threat actor + precondition we ask "how did they get there" ("intrude the server" / "get the login credentials" / …), as in an attack tree exploration, and reveal the Spoofing / Elevation of Privilege threats that make up the stepping stones of such an attack.
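A minimal sketch of that sentence grammar plus the "how did they get there" expansion might look like this (the actor, precondition, and stepping-stone names are made up, not taken from any real catalogue):

```python
# Hypothetical sketch: build threat sentences from actor + precondition,
# then expand attack-tree-style by asking "how did they get there".
def threat_sentence(actor, precondition, action):
    who = f"{actor} with {precondition}" if precondition else actor
    return f"{who} {action}."

# How an actor might reach each precondition; these expansions are
# where Spoofing / Elevation of Privilege threats get revealed.
stepping_stones = {
    "login credentials": ["phish a user", "credential stuffing"],
    "access to the server": ["exploit an unpatched service",
                             "use a stolen SSH key"],
}

t = threat_sentence("Attacker", "login credentials",
                    "reads data via normal use of user interface")
expansion = stepping_stones["login credentials"]
```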
We also have something similar to yours, @Dave, when we enumerate Spoofing threats together with data flows, discussing mutual authentication.
It shows how different identities are protected by different kinds of authentication credentials, how a permission scheme maps between identities and access, and how a bad setup will lead to full impact ("AND") from a single compromise ("OR").
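The AND/OR point can be illustrated with a toy permission scheme (identities and resources below are invented): when one identity holds every permission, a single compromise yields the full impact.

```python
# Hypothetical permission schemes: identity -> set of resources.
good = {  # access spread over identities: attacker needs several
    "app-user":  {"posts"},
    "reporting": {"analytics"},
    "admin":     {"user-records"},
}
bad = {   # one over-privileged identity: one compromise = everything
    "shared-svc-account": {"posts", "analytics", "user-records"},
}

def blast_radius(scheme, compromised_identity):
    """Resources reachable after compromising a single identity."""
    return scheme.get(compromised_identity, set())
```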
We can then discuss different mitigation approaches, like:
- stronger credentials
- less exposure
- fewer people granted access to a certain resource or action
- fewer resources and actions for a single identity
- detection and response controls
- other harm reduction
- shared responsibility and duties between development and operations
But that’s security education.
I’m curious: How do you include such things into your threat models?
@hewerlin AuthN and AuthZ are usually some of the more complex topics, and while there are common patterns, they tend to be unique for every app. The way I do it for every new project during the threat model is:
1. We define the authentication mechanism(s) (e.g. session token for the web application and JWT access token for the public API).
2. We define the authorization model, and for that we need to define:
   - The different entities interacting with the application (e.g. users, groups, API clients, etc.).
   - The different roles that could be assigned to these entities (e.g. system admin, organization admin, etc.).
   - The different permissions that could be assigned to these entities (e.g. user permissions, group permissions, API OAuth scopes, etc.).
   - The resources these entities can access, and how permission is handled per resource (e.g. a blogging web app would have posts, comments, etc. as resources).
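For the blogging example, the four pieces of the model could be captured as simple data (the concrete values here are invented for illustration):

```python
# Hypothetical authorization model for a blogging app, mirroring the
# four questions above: entities, roles, permissions, resources.
model = {
    "entities":    ["user", "group", "api_client"],
    "roles":       ["sysadmin", "org_admin"],
    "permissions": ["read_posts", "post_comment"],  # incl. OAuth scopes
    "resources":   ["posts", "comments"],
}

# Example assignments (made-up principals):
role_assignments = {"alice": ["org_admin"], "bob": []}
```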
And based on this list we convert the authn/authz model into a list of test cases, e.g.:
- A request with a missing or invalid session token can't access route1, route2, etc.
- A user that doesn't have the sysadmin role can't access route3 (a sysadmin-only route).
- A user that is not an organization admin and doesn't have the read_posts permission (assigned directly or via a group the user is a member of) can't access route4.
- An API client that doesn't have the post_comment scope can't access API route5.
- etc.
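The test-case list above can be mirrored as a toy authorization check; the routes, roles, and scopes below are the illustrative ones from the list, and the function is a sketch, not anyone's actual implementation (session validation is reduced to a placeholder):

```python
# Hypothetical check mirroring the test cases above. A real app would
# validate the session/token properly; here it's just a placeholder.
def is_allowed(route, session=None, roles=(), permissions=(), scopes=()):
    if route in ("route1", "route2"):  # any authenticated user
        return session is not None and session != "invalid"
    if route == "route3":              # sysadmin-only route
        return "sysadmin" in roles
    if route == "route4":              # org admin OR read_posts permission
        return "org_admin" in roles or "read_posts" in permissions
    if route == "api_route5":          # API scope check
        return "post_comment" in scopes
    return False                       # deny by default

# The listed cases become executable checks:
assert not is_allowed("route1")                           # missing token
assert not is_allowed("route3", session="s")              # no sysadmin role
assert is_allowed("route4", session="s", permissions=("read_posts",))
assert not is_allowed("api_route5", scopes=())            # missing scope
```

Expressed this way, the same cases can feed both the manual pass (e.g. replaying requests with swapped identities) and the integration test suite.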
Then, once the application is in a testing environment, we actually run all of these test cases (e.g. using the Burp Suite Autorize plugin), and we also make sure we have integration tests written to cover them.
NOTE: Defining the authorization model is a good point at which to discuss the least-privilege principle with the dev team and make sure we are giving users the ability to fine-tune permissions to achieve it.
Thanks for sharing your approach, @Mohamed_AboElKheir , and thanks for joining the conversation!
To my ears, that sounds pretty sane and promising!
Looks like in this case, secure design is the best friend of threat modeling. We should probably start positive and define our AuthN/AuthZ / IAM / user/group/role/permission system. Then check if it works correctly or can be bypassed.
That also goes in the same direction as @Dave's "it's just asking 'how does this work'".
A good example that sometimes mitigation-first may be better than threat-first…
I had some conversations like: "We should add a threat 'Someone accesses the data unauthorized'" - "Let's add a 'Permission system' mitigation"… Now I agree: it works more smoothly the other way round.