Guide · 8 min read

EU AI Act and MCP: a practical compliance checklist for agent builders


The EU AI Act is in force, the high-risk obligations bite from August 2026, and MCP-based agents land squarely in scope. This guide is the practical version: what specifically applies to your agent, what you need to document, and what you need to change in code.

Does the EU AI Act apply to your agent?

Three triggers; any one of them is enough:

  • You serve users in the EU (territorial scope is generous).
  • You use a general-purpose AI model (most LLMs qualify).
  • Your agent operates in a high-risk domain (employment, credit, healthcare, education, critical infrastructure, biometrics, law enforcement).

If any apply, the obligations below are on you, not on the model vendor.

The four risk tiers

Tier           Examples                                       Obligations
Prohibited     Social scoring, manipulative agents            Do not ship.
High-risk      CV screening, credit scoring, medical triage   Full conformity assessment.
Limited-risk   Chatbots, deepfakes                            Transparency obligations.
Minimal-risk   Spam filters, recommendation                   Voluntary codes of conduct.

Most agentic products land in limited-risk by default and high-risk if they touch a protected domain.
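The tiering logic above can be sketched as a small decision function. This is a heuristic for first-pass triage, not legal advice; the function and field names are illustrative:

```typescript
type Tier = "prohibited" | "high-risk" | "limited-risk" | "minimal-risk";

// First-pass tiering heuristic. Real tiering needs legal review;
// the flags below are a simplification of the Act's categories.
function tierAgent(opts: {
  socialScoringOrManipulation: boolean; // prohibited practices
  touchesProtectedDomain: boolean;      // employment, credit, health, ...
  interactsWithHumans: boolean;         // chatbot-style interaction
}): Tier {
  if (opts.socialScoringOrManipulation) return "prohibited";
  if (opts.touchesProtectedDomain) return "high-risk";
  if (opts.interactsWithHumans) return "limited-risk";
  return "minimal-risk";
}
```

Note the ordering: a CV-screening chatbot is high-risk, not limited-risk, because the protected domain dominates the interaction style.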

What MCP changes

The Act regulates AI systems, not models. Your MCP-equipped agent is a system. The protocol introduces three compliance surfaces that bare LLM apps do not have:

  1. Tool inventory — every MCP server is a capability the regulator may ask about.
  2. Data flows — MCP tool calls move personal data; you owe a record.
  3. Auditability — agent decisions must be reconstructable from logs.

The compliance checklist

1. Maintain a tool inventory

For every MCP server in your production config, document: vendor, version pin, scope, data accessed, purpose. Update on every config change. Treat as a controlled artefact (PR review).
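A minimal sketch of what such an inventory record might look like, with a check that catches the most common gaps. The field names are assumptions for illustration, not terminology from the Act:

```typescript
// One record per MCP server in the production config.
interface ToolInventoryEntry {
  server: string;         // e.g. "github-mcp" (illustrative)
  vendor: string;         // who ships and maintains the server
  version: string;        // exact pin, never "latest"
  scopes: string[];       // capabilities granted (read, write, ...)
  dataAccessed: string[]; // categories of data the server can touch
  purpose: string;        // why the agent needs this capability
  lastReviewed: string;   // ISO date of the last PR review
}

// Reject entries that would fail a regulator's first questions.
function validateEntry(e: ToolInventoryEntry): string[] {
  const problems: string[] = [];
  if (e.version === "latest") problems.push("version must be pinned");
  if (e.scopes.length === 0) problems.push("scopes must be explicit");
  if (!e.purpose.trim()) problems.push("purpose is required");
  return problems;
}
```

Running the validator in CI on every config change is one way to make "controlled artefact" more than a policy statement.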

2. Run a data protection impact assessment (DPIA)

Mandatory for high-risk and recommended for limited-risk. Cover: what data the agent reads, what it stores, who can access traces, retention windows, deletion process.

3. Log every tool call

Append-only audit log: who triggered it, which agent, which tool, with what arguments, what result, what model decision followed. Retain for the period your DPIA specifies (typically 12-24 months).
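A minimal in-memory sketch of an append-only log with the fields listed above. A real deployment would back this with a database table or write-once storage; all names here are illustrative:

```typescript
interface AuditLogEntry {
  timestamp: string;         // ISO timestamp of the tool call
  actor: string;             // who triggered it (user or system id)
  agent: string;             // which agent run
  tool: string;              // which MCP tool was called
  args: unknown;             // arguments passed to the tool
  result: "ok" | "error";    // outcome of the call
  decision: string;          // what the model did with the result
}

class AuditLog {
  private entries: AuditLogEntry[] = [];

  // Append-only: entries are frozen, never edited or removed.
  append(e: AuditLogEntry): void {
    this.entries.push(Object.freeze({ ...e }));
  }

  // Reconstruction: every entry for one agent run, in order.
  forAgent(agent: string): readonly AuditLogEntry[] {
    return this.entries.filter((e) => e.agent === agent);
  }
}
```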

4. Implement human oversight

For high-risk operations, the model proposes — a human decides. Do not auto-execute side effects (sends, payments, decisions about persons) without an explicit user confirmation.
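One way to enforce this in code is a generic wrapper that refuses to run a side-effecting tool without explicit human approval. `askUser` is an assumed callback into your UI; this is a sketch, not a prescribed API:

```typescript
type Tool<A, R> = (args: A) => Promise<R>;

// Wrap a side-effecting tool so it never runs without a human "yes".
// The Act's oversight requirement is satisfied by the human decision,
// so the wrapper must fail closed: no approval, no execution.
function requireApproval<A, R>(
  tool: Tool<A, R>,
  askUser: (summary: string) => Promise<boolean>, // assumed UI callback
  describe: (args: A) => string,                  // human-readable summary
): Tool<A, R> {
  return async (args: A) => {
    const approved = await askUser(describe(args));
    if (!approved) throw new Error("Rejected by human reviewer");
    return tool(args);
  };
}
```

The agent sees the wrapped function as an ordinary tool, so the approval gate cannot be bypassed by prompt content.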

5. Disclose AI involvement

Limited-risk transparency obligation. If a user is talking to your agent, they must know. A "powered by AI" badge in the UI is the bare minimum.

6. Document training and tuning

If you fine-tuned a model or use RAG over your own data, document the dataset (sources, dates, filtering). The Act’s obligations under Article 10 cascade down from your model vendor — but only for what they trained. Anything you added is yours.

7. Risk management system

Continuous, not one-shot. Document known risks (hallucination, prompt injection, biased outputs), monitoring for them, and your mitigation plan when they materialise.
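The risk register can live in code too. A sketch of the record shape plus a helper that flags risks whose review has gone stale; the 90-day window is an arbitrary example, not a number from the Act:

```typescript
interface Risk {
  id: string;
  description: string; // e.g. "prompt injection via tool output"
  monitoring: string;  // how you detect it in production
  mitigation: string;  // what you do when it materialises
  lastReviewed: string; // ISO date of the last review
}

// "Continuous, not one-shot": surface risks whose review is stale.
function staleRisks(register: Risk[], today: Date, maxDays = 90): Risk[] {
  const cutoff = today.getTime() - maxDays * 86_400_000; // ms per day
  return register.filter((r) => new Date(r.lastReviewed).getTime() < cutoff);
}
```

Wiring `staleRisks` into a scheduled CI job turns the "continuous" requirement into an alert rather than a calendar reminder.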

8. Conformity assessment for high-risk

Before market entry, run an internal or notified-body conformity assessment. It packages the documentation above into a formal technical file; keep it available for 10 years after the product is on the market.

What you need in the codebase

Concrete code-level changes that map to the obligations:

  • An audit_log table for every tool call (see our analytics guide — same shape).
  • A requireApproval wrapper around tools flagged as side-effecting.
  • A "you are talking to an AI" banner in every UI surface.
  • A user-facing /data endpoint exposing what the agent stores about them and a delete button.
  • A scope-pinned mcp.json in version control with a CODEOWNERS-controlled approval flow.
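The scope-pinned config in the last bullet can look like the sketch below. The server name, package, and version are illustrative; the `mcpServers`/`command`/`args` shape follows the common MCP client config format, and your client may differ:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github@1.2.3"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

Pair it with a CODEOWNERS entry such as `/mcp.json @your-org/compliance-reviewers` (team name hypothetical) so that every change to the tool surface gets a mandated review, which is exactly the record a regulator will ask for.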

What does NOT change

Reassurances:

  • The Act does not ban open-source LLMs (heavy obligations only kick in past compute thresholds).
  • Foundation model providers carry most of the Article 53 obligations, not you.
  • Internal-only agents face lighter requirements than public-facing ones.

Penalties to take seriously

Fines mirror the GDPR structure: up to €35M or 7% of global annual turnover (whichever is higher) for prohibited-AI breaches, and up to €15M or 3% for high-risk obligation breaches. Compliance is cheaper.

Practical next steps

  1. Tier your agent (prohibited / high-risk / limited / minimal).
  2. Inventory your MCP servers.
  3. Stand up the audit log if it is not there yet.
  4. Write the DPIA. A template from your national data protection authority is fine.
  5. Surface AI involvement in the UI.

Then iterate. Compliance is a programme, not a project.

© 2026 Loadout. Built on Angular 21 SSR.