Before your AI agent reads real business documents, test the intake layer.
A practical checklist for German SMEs that want to put AI agents near emails, PDFs, CRM notes and support tickets without creating hidden input, permission or compliance risk.
I write about AI agents in English because most operator and security discussions happen there. KoBra Dataworks works with German SMEs, so implementation, compliance, responsibilities and vendor coordination are intentionally mapped to German business reality.
English for reach. German/DACH context for revenue, compliance and rollout.
If you run a German SME and want to deploy AI agents near inboxes, CRMs, finance workflows or customer communication, use this checklist before the first production rollout.
The topic is international. The implementation has to survive German processes: Datenschutz, Verantwortlichkeiten, Freigaben, Lieferantenkoordination and audit trails.
The problem
A human sees a normal email. The model may receive hidden Unicode instructions that never appear in the UI.
The risk
Agents can update CRMs, draft replies, route tickets or prepare imports. Prompt injection becomes workflow risk.
The fix
Treat external documents as untrusted input. Sanitize, limit tools, log decisions and show approvals clearly.
What is inside the checklist?
The PDF is a pre-flight audit for teams that want useful AI agents without handing them unsafe inputs or oversized tools.
Quick self-test
Can your system detect invisible instructions before the model sees them?
If the answer is no, the first security layer is missing. Start there before giving an agent more tools, more autonomy or access to business-critical workflows.