Virtual meetup summary TMC DACH 1st anniversary: Vibestagram - the vibe coding experiment

:gb:
Report from the TMC DACH anniversary and “Vibestagram”: AI-powered coding experiment and security insights

Context
Around 25 people took part in the TMC DACH virtual meeting on 18 March 2026, moderated by Ron. Following a brief introduction to TMC, there were two exciting announcements:

  1. TMC DACH is celebrating its first birthday!
  2. And as a gift, we have the release of

We then moved straight on to our guest speaker, Jan. He had some exciting news to share about a workshop held at iteratec in the summer of 2025.

The task: in one hour, small teams were to develop a fully functional clone of Instagram with AI support – using whichever model they chose. After the development phase, in the second hour of the workshop, the groups exchanged their code with one another to assess vulnerabilities and quality aspects. The workshop participants included developers, architects and pentesters, amongst others.
Observations from the exercise
The range of results was wide:

  • Some teams focused on visual design (UI/UX), whilst others created fully functional web applications with login, password reset and upload functions.
  • Several groups were still discussing architectural issues throughout the entire hour – others had already built clones that were almost complete.
  • Typical minor web errors (a clickable logout button redirecting to ‘Not found’, a retro layout) were common, but there were also positive surprises: passwords in the database were hashed and salted.
  • At the same time, there were critical vulnerabilities: unprotected upload path access, directly accessible database initialisation, missing CSRF tokens, downloadable SQLite files, and in some cases uploadable webshells.
  • A notable absence: no SQL injection was found – instead, the weaknesses lay at the conceptual and integration levels.

Jan summed it up: “A speedrun through the OWASP Top 10 – just one level lower than expected.”

Related images:

After the exchange phase, the groups evaluated the identified issues and compared them with their initial expectations – in most cases, the conclusion was more positive than anticipated.

Discussion: Experiences with AI and model comparisons
In the subsequent discussion (lasting almost an hour), participants described very different experiences with AI assistance in development:

  • Some used Claude for code generation and Gemini for reviews – with the feedback that AI models tend not to criticise the user much, but do tend to disparage one another (Gemini is said to be ‘more free-spirited’ and will sometimes throw accusations at the person making the request, such as ‘even a seven-year-old could do that!’).
  • Jan reported significant differences between models – what is generated today differs noticeably from results from just a few months ago.
  • One participant emphasised that precise technical descriptions are crucial – vague prompts quickly lead to failure.
  • Ron pointed out that many models struggle to admit ignorance – which can lead to architectural weaknesses.
  • Jan added that targeted prompts (“check specifically for XSS”, “check specifically for SQL injection”) yield significantly better results than general, open-ended security questions.
  • Another participant described the approach of using specialised agents for clearly defined subtasks (e.g. testing, review, specification comparison). This improves precision but also highlights integration limitations.

Overarching reflections
The discussion then turned to the role of AI in the software lifecycle:

  • Positive: According to Jan, the review of penetration test reports and static code analysis now work very reliably.
  • Underutilised: none of the participants currently use AI for dependency checks or for detecting runtime conflicts.
  • Critical: Several voices warned against the loss of ‘professional pride’ – those who let AI make decisions too often would lose basic skills.
  • Another participant raised the issue of digital sovereignty: dependence on proprietary US models jeopardises European control over development processes in the long term.
  • Doctors who deliberately keep practising image diagnosis without AI were used as an analogy for maintaining software skills. No one in the audience had yet actively adopted such a practice.
  • There was agreement that European models – such as Aleph Alpha – could catch up in the future, but are currently still lagging behind.

Conclusion
The experiment showed that AI-assisted development can produce surprisingly functional applications in a short time, yet security-critical design flaws remain. Quality depends heavily on the model, the precision of the prompts, and the division of roles between humans and AI.
At the same time, it became clear that organisational and ethical issues – dependency, maintaining competence, transparency – are increasingly coming to the fore and go beyond mere technical security.
It remains to be seen how this will develop as models improve…

Next event:
As usual, we’ve already scheduled the next event in April:

luma.com

:hammer_and_wrench: Tool showcase: AttackTree.online meets AI agents - TMC DACH · Zoom · Luma

:germany: :austria: :switzerland: [lang:de] Meet and try out AttackTree.online! With Christian Schneider, we'll learn and try out the threat modeling tool AttackTree.online -…

This is part of a new event format: we want to show which threat modeling tools are out there and try them out together with you!
