Guide · 4 min read

Corporate agent compliance toolkit: the evidence package every auditor wants

What the SOC2, ISO 27001, and EU AI Act auditors actually want to see for an agent in production. The artefact list, the policy templates, and the controls mapping that closes audits in one cycle.

An auditor walks in and asks "show me how this agent is governed". If you cannot produce the package in 30 minutes, you are looking at a 6-month engagement. Here is the artefact list that closes most audits in one cycle.

The seven artefacts every auditor expects

1. Agent inventory

A versioned list of every production agent: purpose, data accessed, model, tools, owners. Updated on every change.
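A minimal sketch of what one inventory row can look like, using a Python dataclass. The field names and example values are illustrative assumptions, not a prescribed schema; the point is that every field the auditor asks about has a home and the record is versioned.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AgentRecord:
    """One row of the agent inventory; field names are illustrative."""
    name: str
    version: str  # bumped on every change to the agent
    purpose: str
    model: str
    data_accessed: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)
    owner: str = ""

# Hypothetical example entry
support_bot = AgentRecord(
    name="support-triage",
    version="2024.11.2",
    purpose="Classify and route inbound support tickets",
    model="gpt-4o",
    data_accessed=["zendesk_tickets"],
    tools=["zendesk.search", "slack.post_message"],
    owner="platform-team",
)
print(asdict(support_bot)["name"])  # "support-triage"
```

Serialising with `asdict` makes it trivial to export the whole inventory as JSON for the evidence bucket described later.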

2. MCP tool inventory

Every server in production with: vendor, version, scope, lawful basis, reviewed-at. Mirror of your self-hosted registry.
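One thing auditors check on this artefact is whether `reviewed-at` is actually maintained. A small staleness check, sketched below with illustrative entries and an assumed 90-day review cadence, catches drifting rows before the audit does:

```python
from datetime import date, timedelta

# Illustrative inventory rows; keys mirror the fields named in the text.
mcp_servers = [
    {"name": "postgres-mcp", "vendor": "acme", "version": "1.4.0",
     "scope": "read-only", "lawful_basis": "contract",
     "reviewed_at": date(2025, 1, 10)},
    {"name": "slack-mcp", "vendor": "acme", "version": "0.9.2",
     "scope": "post-only", "lawful_basis": "legitimate interest",
     "reviewed_at": date(2024, 6, 1)},
]

def stale_entries(servers, today, max_age_days=90):
    """Return server names whose last review is older than the cadence."""
    cutoff = today - timedelta(days=max_age_days)
    return [s["name"] for s in servers if s["reviewed_at"] < cutoff]

print(stale_entries(mcp_servers, today=date(2025, 3, 1)))  # ['slack-mcp']
```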

3. Risk register

Top 10 risks per agent: hallucination, prompt injection, data leakage, model drift, etc. With assessed likelihood, impact, and mitigation.
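A common way to rank the register is a simple likelihood × impact score on 1–5 scales. The rows and mitigations below are illustrative, not a recommended scoring model:

```python
# Illustrative risk rows; likelihood and impact on 1-5 scales.
risks = [
    {"risk": "prompt injection", "likelihood": 4, "impact": 5,
     "mitigation": "input filtering + tool scoping"},
    {"risk": "hallucination", "likelihood": 5, "impact": 3,
     "mitigation": "eval gates + required citations"},
    {"risk": "data leakage", "likelihood": 2, "impact": 5,
     "mitigation": "DLP on agent outputs"},
    {"risk": "model drift", "likelihood": 3, "impact": 2,
     "mitigation": "quarterly re-evals"},
]

def top_risks(register, n=10):
    """Sort by likelihood x impact, highest score first."""
    return sorted(register, key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)[:n]

for r in top_risks(risks):
    print(r["risk"], r["likelihood"] * r["impact"])
```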

4. DPIA / impact assessment

Per agent processing personal data. Covers data flows, retention, cross-border transfers. See GDPR-compliant agents.

5. Audit log evidence

Sample of audit-trail entries showing the schema, retention, immutability proof. See audit trails.
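The immutability proof usually means a hash chain: each entry's hash covers the previous entry's hash, so editing any record invalidates everything after it. A minimal sketch (field names are assumptions):

```python
import hashlib
import json

def chain_entry(prev_hash: str, payload: dict) -> dict:
    """Build an append-only entry whose hash covers the previous hash."""
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "genesis"
    for entry in log:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Build a two-entry log, then tamper with it.
log, prev = [], "genesis"
for event in [{"agent": "support-triage", "tool": "zendesk.search"},
              {"agent": "support-triage", "tool": "slack.post_message"}]:
    entry = chain_entry(prev, event)
    log.append(entry)
    prev = entry["hash"]

print(verify(log))  # True
log[0]["payload"]["tool"] = "tampered"
print(verify(log))  # False
```

Anchoring the latest hash somewhere external (a WORM bucket, a ticket, a signed release) is what turns this from "hard to tamper" into evidence.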

6. Eval results

Last quarter's eval pass rates per agent, with regression notes. See evaluation framework.
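The regression notes are easiest to produce mechanically: compare this quarter's pass rates against last quarter's and flag drops beyond a tolerance. A sketch with made-up agents and numbers:

```python
# Illustrative pass rates per agent for two quarters.
last_q = {"support-triage": 0.94, "billing-agent": 0.88}
this_q = {"support-triage": 0.95, "billing-agent": 0.81}

def regressions(prev, curr, tolerance=0.02):
    """Agents whose pass rate dropped by more than `tolerance`."""
    return {agent: (prev[agent], curr[agent])
            for agent in prev
            if agent in curr and prev[agent] - curr[agent] > tolerance}

print(regressions(last_q, this_q))  # {'billing-agent': (0.88, 0.81)}
```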

7. Incident log

Every agent-related incident in the last 12 months with root cause and remediation.

Mapping to specific frameworks

SOC2

| Trust Service Criterion | Agent control |
| --- | --- |
| CC6 (logical access) | Agent SSO + scope enforcement |
| CC7 (system operations) | Eval pipeline + monitoring |
| CC8 (change management) | Prompt versioning, model rollout policy |
| A1 (availability) | SLOs on agent uptime + tool latency |
| C1 (confidentiality) | DLP + memory encryption |
| P1 (privacy) | DPIA + consent management |

ISO 27001 (Annex A)

| Control | Agent equivalent |
| --- | --- |
| A.5.7 (threat intelligence) | Anomaly detection on tool calls |
| A.8.2 (privileged access) | Workload identity + scope ACLs |
| A.8.16 (monitoring) | Real-time agent monitoring |
| A.8.28 (secure coding) | Pinned MCP versions, signed registry |

EU AI Act (high-risk)

Articles 9, 10, 11, 12, 14, 15 map to: risk management, data governance, technical documentation, audit logs, human oversight, accuracy/robustness. The artefacts above cover all six. See the EU AI Act guide for specifics.

Policy templates worth having

A starter set:

  • AI Acceptable Use Policy — what employees can and cannot do with agents.
  • Model Risk Management Policy — how new models are evaluated before production.
  • Agent Change Management Policy — prompt and model rollout cadence.
  • Vendor AI Risk Policy — assessing AI subprocessors and MCP server vendors.
  • Incident Response (AI) Policy — escalation paths specific to agent incidents.

Each is 2–4 pages. Templates are widely available; tailor them to your actual processes rather than adopting them verbatim.

Evidence collection automation

Three weeks of work to automate the collection:

  • A scheduled job dumps the agent inventory, tool inventory, and audit log samples to a compliance bucket.
  • A dashboard renders eval results over time per agent.
  • A scripted DPIA refresh prompts owners quarterly.

Without automation, the artefacts go stale within a quarter.
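The scheduled dump is the simplest piece to sketch: a job that writes a dated snapshot of each artefact to the compliance bucket. The function below uses a local directory as a stand-in for the bucket; the artefact names and payloads are illustrative:

```python
import json
import pathlib
import tempfile
from datetime import date

def dump_evidence(bucket_dir: str, artefacts: dict) -> pathlib.Path:
    """Write a dated JSON snapshot of each artefact; a scheduler calls this daily."""
    out = pathlib.Path(bucket_dir) / date.today().isoformat()
    out.mkdir(parents=True, exist_ok=True)
    for name, payload in artefacts.items():
        (out / f"{name}.json").write_text(json.dumps(payload, indent=2))
    return out

# Local directory standing in for the compliance bucket.
bucket = tempfile.mkdtemp()
snapshot = dump_evidence(bucket, {
    "agent_inventory": [{"name": "support-triage", "version": "2024.11.2"}],
    "tool_inventory": [{"name": "postgres-mcp", "version": "1.4.0"}],
})
print(sorted(p.name for p in snapshot.iterdir()))
```

In production the same function would target object storage with versioning enabled, so each day's snapshot is itself immutable evidence.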

What auditors will actually push back on

Four common findings:

  • No segregation between dev and prod prompts — anyone can change the production agent.
  • Tool inventory missing scopes — listing the tool but not what it can do.
  • Audit log without integrity — no hash chain, easy to tamper.
  • Incidents not tracked — "we have not had any" is not believable.

Pre-empt all four and the audit usually passes the first time.

The compliance owner role

Most teams above 100 employees end up creating an AI Compliance Lead (or load it onto an existing GRC role). Responsibilities:

  • Own the artefact set above.
  • Quarterly refresh of risk register and DPIAs.
  • Coordinate auditor engagements.
  • Liaison between engineering and legal/risk.

Without an owner, every audit becomes a fire drill.

Common mistakes

  • Treating compliance as a one-time exercise — it is a programme, not a project.
  • Engineer-only owned compliance — legal and risk must be in the loop.
  • Generic policies copied from the internet — auditors notice; tailor to actual operations.
  • No evidence rotation — last year's screenshots do not satisfy this year's audit.

Where this is heading

Two trends to expect by 2027: SOC2 and ISO 27001 adding explicit AI control requirements, and shared compliance toolkits emerging as products (Vanta and Drata extending into AI). Building the artefacts manually now means you can swap into a tool later.
