Claude Code Meetup · April 16, 2026 · Nuremberg

How I Accidentally Became #2 VDP* Researcher for Germany on HackerOne

HackerOne Germany VDP leaderboard showing fbettag

A candid story about building iteratively with Claude Code and ending up on the HackerOne VDP leaderboard for Germany.

* VDP = Vulnerability Disclosure Program

Outline

What we will cover in the next 25 minutes

~1 min Who am I
~8 min · Deep tech What I built and how it evolved
~8 min · Practical The build method & universal loop pattern
~5 min · Blueprint How to adapt this for your own projects
~5 min Q&A
First half is technical on purpose — architecture and implementation details up front. Second half focuses on the reusable patterns you can take home.
Who I am

Franz Bettag Polyglot engineer since 2001

25 years of consulting, from enterprise to Fortune 500. On-prem, cloud, AI, security, and development — paid projects in 30+ languages before AI came around.

Bettag Systems logo
On-Prem & Cloud, AI & Security Engineering for Enterprise to Fortune 500
Most notable for the German market kleinanzeigen.de

Sole IT engineer next to the CEO. Redeveloped the platform from 2014 to 2019, serving 30M+ monthly users, then exited to eBay.

Wall Street HFT block-trade optimization

Tasked to fix a critical performance bottleneck. Found a $2M+ accounting error while rewriting the system. Cut monthly job runtime from 2+ hours to 5 minutes.

Fashion @ Scale Nordstrom / HauteLook

Rebuilt flash-sale push notification infra for the largest fashion retailer in the US. 10M+ concurrent pushes, delivery cut from 2 days to under 10 minutes.

Franz Bettag profile photo
Family lore

If you recognize my last name, it's probably because my grandfather invented the Bobby Car and founded BIG Spielwaren.

BIG Bobby Car — where it all started
Validated outcomes

The ranking followed real shipped findings

CVE Keycloak CVE-2026-1190
Critical fix 2x ClickHouse RBAC bypass
Subdomain takeover 3x Amazon VDP
Subdomain takeover OANDA VDP
Subdomain takeover → SIP / VoIP infrastructure takeover Opera
Valid finding Ruby on Rails
HackerOne profile stats: 7.00 Signal, 99th Percentile
Architecture

One canonical backend. Distributed execution.

Control plane

Central server

State, orchestration, findings, submissions, operator UI.

Session layer · 56 agents

Agents = identity + method

Full session prompts: exploit-web-attacking, recon-subdomain-discovery, audit-finding-verification, …

Tool layer · 200+ skills

Skills = composable building blocks

Reusable techniques: subdomain-takeover-detection, nuclei-scanning, playwright, sqlmap, …

Execution

4 pwn hosts + shared CLI

A shared `bounty-cli` standardizes every host run.

Infra

NixOS + flakes

Deterministic infra, reproducible deploys, auditable config.

Design rule

Validation-first state machine

Every risky phase emits an artifact that must pass a gate.

Built with Claude Code
Evolution

The shape changed as the constraints became obvious

01

Start with the smallest useful slice

Central server first. One source of truth.

02

MCP was useful, then expensive

Good for early iteration. Bad for token and context efficiency. Replaced with a shared Go CLI.

03

Push execution to workers

Parallel campaigns across architectures: Linux for web & Android, macOS for BinDiff fuzzing, iOS firmware diffing & Simulator app testing.

04

Separate agents from skills

Agents own the session identity. Skills are composable tools agents invoke. Clear contract between the two.

Every step was driven by a real constraint, not a design preference.
Why not MCP?

MCP burns tokens. A CLI + skills doesn't.

MCP sends full tool schemas on every request — context cost scales with tool count.
Every tool invocation round-trips through the model. Five tools = five inference calls.
A CLI runs locally with zero token overhead. Skills are injected only when needed.
Result: same coverage, 10–20x fewer tokens per campaign run.
MCP approach

Model → tool schema → inference → tool call → result → inference → …

High token cost per step
CLI + skills approach

Prompt → agent runs locally → skills execute in-process → results to server

Tokens only for reasoning
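A back-of-envelope illustration of why schema round-trips dominate. The token counts below are invented for illustration, not measurements from the rig:

```go
package main

import "fmt"

// Illustrative numbers only: ~300 tokens per tool schema, 10 tools registered,
// 5 tool calls per task, ~300 reasoning tokens per inference step.
func main() {
	const (
		schemaTokens    = 300 // per tool schema, resent with every request (MCP)
		tools           = 10
		calls           = 5
		reasoningTokens = 300
	)

	// MCP: the full schema set accompanies every round-trip, plus reasoning each step.
	mcp := calls * (tools*schemaTokens + reasoningTokens)

	// CLI + skills: tools run locally; the model pays only for reasoning.
	cli := calls * reasoningTokens

	// prints: MCP: 16500 tokens, CLI: 1500 tokens, ratio ~11x
	fmt.Printf("MCP: %d tokens, CLI: %d tokens, ratio ~%dx\n", mcp, cli, mcp/cli)
}
```

With larger tool counts the ratio grows, since MCP's cost scales with every registered tool while the CLI's stays flat.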
Case study

Opera SIP takeover: why recon automation matters

_sip._tcp.opera.com.   86400 IN SRV 0 0 5060 e1.viju.vc.
_sips._tcp.opera.com.  86400 IN SRV 0 0 5061 e1.viju.vc.
The rig flagged dangling SRV records pointing at e1.viju.vc — a host under viju.vc, a domain that was no longer registered.
RFC 3263-compliant SIP clients would route Opera's SIP traffic to whoever controls viju.vc.
I registered viju.vc manually, pointed e1.viju.vc at my infrastructure, and confirmed the takeover.
Responsible disclosure to Opera via Bugcrowd. Domain held as proof, not weaponized.
Legacy DNS edge case
High signal, low noise
Best found by broad automation
Context priming

Every session is a new hire

An A-grade engineer who knows every tool — but nothing about your project, your architecture, or your constraints.
Without priming, it will produce generic code that technically works but doesn't fit your system.
The question for every session: “How do I explain what I need in the fewest words while still conveying the architecture and the problem?”
CLAUDE.md

Project rules, coding conventions, architecture overview. Loaded automatically every session.

Memory

Learnings from past sessions. Mistakes, patterns, decisions the agent shouldn't repeat or forget.

Your prompt

The specific task. Short, precise, assumes the agent already knows the context from the layers above.
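A minimal sketch of what such a CLAUDE.md might look like for this project — the specific bullet points are invented for illustration:

```markdown
# Project: bounty rig

## Architecture
- Central server owns state, findings, submissions; workers only execute.
- Agents (session prompts) invoke skills; skills never talk to the server directly.

## Conventions
- Go for all CLI tooling; NixOS flakes for deploys.
- Every risky phase writes an artifact and must pass its gate before handoff.

## Hard rules
- Never submit a finding that has not passed audit-finding-verification.
```

Anything stated here no longer needs to be repeated in prompts — every session starts already knowing it.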

Build method

Claude Code was useful because I constrained it

01
One feature per session

Scope limits hallucination. Smaller context = better output.

02
Write the gate before the code (TDD)

“You are done when X passes.” Define success criteria upfront.

03
CLAUDE.md + memory for persistent context

CLAUDE.md documents project rules. Memory retains learnings across sessions.

04
Use swarm for parallelization

Split independent work across agents. Recombine only validated outputs.

Pairing Test-Driven Development with rapid prototyping is now practical, thanks to these models.
If you plan on shipping — never ship something whose output you can't judge yourself.
Reusable loop

This is the portable pattern: phase separation with gates

01

Discovery

Map the surface and emit concrete candidate artifacts.

02

Execution

Probe inside a bounded role with explicit success criteria.

03

Validation

Reject weak artifacts before they contaminate later phases.

04

Composition

Chain only validated pieces into a higher-value outcome.

05

Feedback

Update prompts, routing rules, and profiles from misses.

The point is not “more agents”. It is fewer ambiguous handoffs.
The loop applied

Ship a CI/CD pipeline from zero to deployed

01

Discovery

“Investigate this repo. How is it built? What framework, what output? Create an initial git commit if there isn't one.”

02

Execution GitLab

“Build a GitLab CI/CD pipeline with AutoDevops for Kubernetes deployment. Look at similar projects for reference.”

02

Execution GitHub

“Set up GitHub Actions to build a Docker image and deploy to Kubernetes. Look at similar projects for reference.”

03

Validation

“Commit, push, and check the pipeline status. If it fails, read the logs and fix it. Repeat until the pipeline is green.”

04

Composition

Build passes → image pushed → deploy triggered → pods running. Each stage gates the next.

05

Feedback

`curl https://your-domain.com` returns 200. If not, iterate. You are done when the site is live.
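The GitHub Actions variant of the Execution step might look roughly like this — the registry, secret names, and deployment name are placeholders, and real cluster credentials would still need to be wired in:

```yaml
# Sketch only: ghcr.io registry and deployment/app are placeholders.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
  deploy:
    needs: build   # composition: deploy only runs if the build gate passed
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Kubernetes
        run: kubectl set image deployment/app app=ghcr.io/${{ github.repository }}:${{ github.sha }}
```

`needs: build` is the gate in YAML form: each stage only fires when the previous one is green.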

Blueprint

How to copy this tomorrow

01
Discovery Pick one problem. Map the surface. Define the success artifact.
02
Execution Probe inside a bounded role with explicit success criteria.
03
Validation Reject weak artifacts before they contaminate later phases.
04
Composition Chain only validated outputs into a higher-value outcome.
05
Feedback Feed misses back into prompts, profiles, and routing.
Start here. Repeat until stable. Then scale.
One more example

GitLab v15 → v18 on Kubernetes. Overnight. Unattended.

01

Discovery

“Explore our current setup in read-only mode on this Kubernetes cluster, then do a full backup.”

02

Execution

“Research the best upgrade path. Do not skip intermediary steps. Verify dependencies and requirements for the new versions.”

03

Validation

“Start the upgrade procedure. You are done when all services that are currently running and up are successfully upgraded and responding.”

04

Composition

Each intermediary version (v15 → v16 → v17 → v18) is a validated artifact. PostgreSQL migrated in lockstep. Only proceed when the previous hop is healthy.

05

Feedback

Failures at any hop feed back into the next attempt. The agent retries with context, not from scratch.

Closing

Don’t start with one giant prompt. Start with a loop.

Security is the example. The architecture is the reusable asset.

Questions?