Original post by @BrookShoenfield
@nznigel thanks for VEX. I hadn’t seen that. I definitely need to dig into how “exploitability” is arrived at: that could be very useful for existing vulnerabilities(1).
As so often happens with threat models, practitioners get stuck in what exists. The power of threat modelling is anticipating what may arrive in the future (i.e., weaknesses that don’t necessarily exist today).
A threat model is not just another analysis like the scans and tests that hunt for existing weaknesses. We need not confine ourselves to “vulnerabilities”; in fact, we mustn’t!
VEX analysis can be applied to any weakness (if I understand correctly?). That’s important in threat modelling, since exploitability is a key component of risk. Great!
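For anyone else who hadn’t seen VEX: from a quick look, a statement in the OpenVEX flavor looks roughly like the sketch below. This is illustrative only (the identifiers are made up, and the exact shape varies across VEX formats — CycloneDX and CSAF each have their own), so check the spec before relying on field names:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/example-001",
  "author": "Example Vendor Product Security",
  "timestamp": "2024-01-15T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2021-44228" },
      "products": [ { "@id": "pkg:maven/org.example/gateway@2.3.1" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

The relevant part for this discussion is the status/justification pair: it records an exploitability judgement about a known vulnerability in context. Which is exactly the kind of per-context reasoning I’m after, except limited to weaknesses that already exist.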
Still, and I try to make this point (over and over): threat models open an opportunity to gaze into an educated crystal ball so that we can prepare for threats that will arise.
Take the classic example of a buffer overflow. Experience from the most mature software security practices demonstrates that no matter how hard we try with languages like C/C++, memory vulnerabilities will eventually leak into a release. The less mature the practice, the more leaks there will be.
So even if, as far as we know, a piece of C++ contains no issues today, we’ve got to assume that eventually someone will make a mistake that our SAST + DAST + fuzzing regime will miss. Threat modelling gives us a chance to prepare today for that which has some likelihood of showing up tomorrow and which we believe will have impact. Anticipatory!
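To make that concrete, here’s a toy sketch (the routine and names are invented for illustration) of the kind of defect that slips through, alongside the anticipatory posture a threat model can mandate today:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical ID-copying helper: the kind of routine where a missed
   bounds check can leak past a SAST + DAST + fuzzing regime. */

/* The mistake someone eventually makes: the length parameter is
   accepted but never used, so a long src overflows dst. */
void copy_id_unsafe(char *dst, size_t dst_len, const char *src) {
    (void)dst_len;        /* bounds check forgotten -- the latent bug */
    strcpy(dst, src);     /* undefined behavior when src exceeds dst  */
}

/* The anticipatory posture: bound every copy as a matter of policy,
   whether or not any overflow is known to exist today. */
void copy_id_safe(char *dst, size_t dst_len, const char *src) {
    snprintf(dst, dst_len, "%s", src);  /* truncates; always NUL-terminates */
}
```

Nothing in the unsafe version is “a vulnerability” until a long input arrives; the safe version costs nothing now and absorbs tomorrow’s mistake.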
Coming back to VEX and SBOM: neither addresses the issue I wrote about above (and in my books, yada, yada, blah, blah, blah - Brook’s once again blathering on). To wit: how components interact.
There might be no “vulnerability” in either component; each is acting as designed. But their interaction introduces a weakness. Thus, I arrive at my “security contract”: the gateway’s security assumptions open an opportunity to exploit future issues in any component receiving from the gateway (as per my example in my first comment).
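A minimal sketch of the contract mismatch I mean (entirely hypothetical components and names): the gateway’s spec says “forward the request”; the backend’s spec assumes “the gateway caps field lengths”. Each passes its own tests; the composition is the weakness:

```c
#include <string.h>

/* Hypothetical two-component system, for illustration only. */

/* Component A, the gateway: per its own spec it simply forwards the
   request body untouched. No bug here -- forwarding is the design. */
const char *gateway_forward(const char *request_body) {
    return request_body;
}

/* Component B, a backend: per ITS spec, inputs arrive pre-validated
   ("the gateway caps names at 15 chars"), so it copies unchecked. */
void backend_store_name(char out[16], const char *validated_name) {
    strcpy(out, validated_name);  /* safe only while the assumption holds */
}
```

Neither function is “vulnerable” against its own specification. The unstated security contract between them — who validates length, and where — is what no component-level scanner or VEX statement will surface, but a threat model can.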
Maybe I’m just too poor a writer to explain this problem clearly? I have certainly done my best, both here and in Secrets and our DevOps-focused book. Sorry if I cannot convey the problem well enough.
(1) Vinay Bansal’s and my Just Good Enough Risk Rating (JGERR) can be used for “exploitability” rating; that’s partially what it addresses. I’ve since revised it multiple times so that the exploitability attribute hopefully gets clearer with each revision. I will add that the most important factor, which we used very successfully at both Cisco and McAfee, is whether exploitation in context delivers anything useful to the attacker. That should (IMVHO) be the first assessment. We know that the vast majority of reported issues never get used “in the wild”. This might be due to the complexity of exploitation, but is often due to exploitation prerequisites that render exploitation moot — like requiring high privileges to exploit when exploitation delivers those same privileges. Attackers don’t sit around overflowing buffers at high privilege; security researchers do. After gaining high privileges, attackers are having their way with the OS, prosecuting their goals. Game over; pw0n.
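As a sketch of that first-pass assessment (my own toy encoding for this comment, not JGERR’s actual attributes or scales):

```c
#include <stdbool.h>

/* Toy privilege ladder, for illustration only; JGERR's published
   rating attributes are not reproduced here. */
typedef enum { PRIV_NONE, PRIV_USER, PRIV_ADMIN } privilege_t;

/* First-pass filter: does successful exploitation deliver anything
   the attacker doesn't already have to hold in order to exploit? */
bool exploitation_useful(privilege_t required, privilege_t delivered) {
    /* If the attacker must already hold at least what the exploit
       grants, the issue is moot in the wild (researchers aside). */
    return delivered > required;
}
```

So a bug that requires admin to trigger and yields admin fails the filter and can be deprioritized; a bug reachable without privileges that yields admin passes, and deserves the full rating treatment.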