Team 16 - 4by4ce – Four By Force
Threat Modeling Hackathon 2025: TMC-Drive
Hello from Team 16 (Team 4by4ce) - a team of security experts covering a variety of topics!
We are thrilled to have received a Special Mention for our threat model at the 2025 awards!
Threat Model Scope
As an established vehicle manufacturer, TMC-Drive is developing new components, in addition to its existing ones, for a new autonomous vehicle model. The existing components are:
- Hardware System with: Physical Structure, Embedded Devices
- Software System with: Vehicle Onboard Software with the Autonomous Driving Stack, Mobile App, Cloud Backend
Assuming the Autonomous Driving function to be the most critical part, the scope of this report is limited to the stack and the embedded device on which the stack runs and communicates with the relevant sensors and actuators.
The model analyzes two use cases:
- The Autonomous Driving stack running within the embedded device [UC-IC]
- The development and maintenance of the Autonomous Driving stack [UC-ML]
Used Methodologies and Frameworks
We used the following methodologies and frameworks to threat model the defined use cases:
General Approach for Threat Model
- Model the system, including actors, dependencies, and data flows
- Identify and analyse the threats
- Determine countermeasures
- Validate the outcome
Frameworks
[UC-IC] The Autonomous Driving stack running within the embedded device:
- TARA (Threat Analysis and Risk Assessment) (from ISO/SAE 21434)
To obtain type approval in the European Union, the product needs to comply with UNECE R155. We therefore follow the TARA process (Threat Analysis and Risk Assessment) described in ISO/SAE 21434.
- ASRG Garage Threat Catalog
As the threat catalog, we use the Automotive Security Research Group Garage (ASRG Garage) Threat Catalog. The ASRG is a community-driven non-profit initiative.
[UC-ML] The development and maintenance of the Autonomous Driving stack
- STRIDE
The approach to identifying threats for the development of the Autonomous Driving stack was based on the STRIDE framework for identifying areas of concern.
- MITRE ATLAS
For the dedicated ML threats, we used the MITRE ATLAS and OWASP TOP10 for LLM frameworks. MITRE ATLAS provides a structured framework for understanding, mapping, and simulating real-world adversarial techniques against AI/ML systems, including LLMs. It is modeled after the well-established MITRE ATT&CK, but specifically tailored to AI.
- OWASP TOP10 for LLM
OWASP TOP10 for LLM is a community-driven standard that identifies the most critical security vulnerabilities specific to LLMs and their applications, providing a security-first view of LLM misuse, abuse, and configuration weaknesses.
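To illustrate how STRIDE is applied per data-flow-diagram element, the following is a minimal sketch of the common STRIDE-per-element mapping. The element names and the example data flow are hypothetical and not taken from our actual TMC-Drive model.

```python
# Illustrative STRIDE-per-element sketch (hypothetical element names;
# not part of the actual TMC-Drive threat model).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

# Commonly used applicability table: which STRIDE categories
# apply to each type of data-flow-diagram element.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

def threats_for(element_type: str) -> list[str]:
    """Return the STRIDE threat categories applicable to a DFD element."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]

# Example: categories to consider for a data flow such as
# "sensor readings -> Autonomous Driving stack".
print(threats_for("data_flow"))
# -> ['Tampering', 'Information Disclosure', 'Denial of Service']
```

Enumerating the model's elements against such a table is how areas of concern were systematically identified before mapping the ML-specific threats to MITRE ATLAS and the OWASP TOP10 for LLM.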
Our Threat Model Artefacts
Final Report - 2025 Hackathon Threat Modelling
Retrospective - 2025 Hackathon Threat Modelling