Explainer · 4 min read

AI agent liability framework: who is on the hook when the agent gets it wrong

When an autonomous agent causes harm, who pays? The vendor, the operator, the user, or the model developer? A look at the emerging legal frameworks (EU AI Liability Directive, US state-level laws, sectoral rules) and the contracts that allocate the risk.

"The agent decided" is not a defence. When an autonomous agent causes harm, somebody pays. The legal frameworks that allocate the bill are crystallising in 2026, and the contracts you sign now decide where you sit when they do.

Why this matters now

Three drivers:

  • First litigation waves — agent-caused damages reaching court in 2025–2026.
  • EU AI Liability Directive — proposed framework that puts agent operators on the hook by default.
  • Insurance market — D&O (directors and officers) and E&O (errors and omissions) insurers are requiring liability frameworks before they renew.

Teams without a framework face one of two outcomes: refused coverage or unfunded liability.

The four parties in the chain

Every agent deployment has up to four parties:

  • Model vendor (Anthropic, OpenAI) — provides the foundation model.
  • Tool / MCP server vendor — provides specific capabilities the agent uses.
  • Agent operator — runs the agent (this is usually you).
  • End user — interacts with or is affected by the agent.

Liability flows differently in each jurisdiction; most concentrate it on the operator.

Where each jurisdiction lands today

EU

The AI Liability Directive (proposed) creates a rebuttable presumption that the operator caused the harm if the agent system was high-risk and the harm matches a known failure mode. The Product Liability Directive update extends "product" to include software, including AI.

US

Patchwork. Some state-level laws (California, Texas) explicitly assign liability to operators for autonomous decisions. Federal sectoral rules (CFPB, FTC) hold the operator responsible for outcomes.

UK

Common-law negligence framework, with Treasury proposals for AI-specific liability under consultation.

China

The PIPL and emerging AI rules hold operators responsible by default; vendor liability requires explicit contract provisions.

What contracts should allocate

Six clauses every agent contract should address:

1. Liability cap by party

Who pays, and up to how much. Typical arrangement: the vendor caps its liability at fees paid; the operator carries the rest.

2. Exclusions

What is explicitly not covered. Vendors will exclude misuse, prompt injection, and jailbreaking. Negotiate where that line sits.

3. Indemnification

Who indemnifies whom. The vendor often indemnifies against IP claims; the operator against use-case-specific claims.

4. Insurance requirements

Both sides carry minimum coverage. Often $10M for vendors; varies for operators.

5. Right to audit

Operator's right to audit the vendor's controls (security, compliance, training data).

6. Notification SLAs

Time from incident discovery to notification of the other party.
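The cap-and-exclusion mechanics above are easy to sanity-check in code. A minimal sketch, assuming hypothetical field names and the typical fees-paid vendor cap — none of this comes from a real contract schema:

```typescript
// Hypothetical model of the six clauses above; field names are
// illustrative, not drawn from any real contract.
interface LiabilityTerms {
  vendorCapUsd: number;          // clause 1: vendor cap, often fees paid
  exclusions: string[];          // clause 2: e.g. "misuse", "prompt-injection"
  vendorIndemnifiesIp: boolean;  // clause 3: vendor covers IP claims
  minVendorInsuranceUsd: number; // clause 4: minimum vendor coverage
  operatorAuditRight: boolean;   // clause 5: operator may audit controls
  notifyWithinHours: number;     // clause 6: incident notification SLA
}

// Split a claim: the vendor pays up to its cap (zero if the cause is
// excluded); the operator carries the remainder.
function allocate(terms: LiabilityTerms, claimUsd: number, cause: string) {
  const vendorShare = terms.exclusions.includes(cause)
    ? 0
    : Math.min(claimUsd, terms.vendorCapUsd);
  return { vendorShare, operatorShare: claimUsd - vendorShare };
}
```

With a $120k fees-paid cap, a $1M claim splits $120k vendor / $880k operator — and the vendor share drops to zero the moment the cause matches an exclusion. That asymmetry is why clause 2 is worth negotiating.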

What the operator is on the hook for

Three categories:

  • Direct decisions the agent made (the agent recommended; you took the action).
  • Foreseeable failures — known failure modes you did not mitigate.
  • Privacy and security breaches in your operation.

The operator's defence: documented governance, eval evidence, mitigation evidence. See governance framework.

What the vendor is on the hook for

Narrower scope:

  • Defects in the model that violate documented behaviour.
  • Training data issues that produce IP claims.
  • Service outages beyond stated SLAs.

Vendors push hard to limit their exposure beyond this scope. Larger operators have the leverage to negotiate.

Insurance landscape

Three product lines emerging:

  • AI Errors and Omissions — coverage for operator-caused harm via agent decisions.
  • Cyber+AI — extends cyber policies to cover AI-specific incidents.
  • Vendor warranty insurance — covers vendor-side failures separately.

Premiums in 2026 vary widely; the difference between a documented governance programme and none is 2–4x in price.

What does NOT shield you

  • "The model said so" — not a defence in any jurisdiction.
  • Vendor disclaimers alone — operators are liable for use, not just defects.
  • "We followed the docs" — necessary, not sufficient.
  • Open-source disclaimer of warranties — partially effective; operator liability remains.

The supply-chain question

Liability follows the value chain. If you wrap a vendor's MCP server and resell it, you inherit a portion of liability for what it does. Three patterns:

  • Pass-through — explicit reseller status with vendor flowing terms downstream.
  • Repackaging — you take on full operator liability.
  • Co-branded — joint liability, allocated by contract.

Most product companies are repackaging without realising it.
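The three patterns reduce to a simple decision rule. A sketch with made-up option names — a way to reason about where you sit, not legal advice:

```typescript
type SupplyPattern = "pass-through" | "repackaging" | "co-branded";

// Illustrative classifier for the three patterns above: flowing the
// vendor's terms downstream makes you a pass-through; naming the vendor
// jointly makes you co-branded; otherwise you default to repackaging
// and carry full operator liability.
function classify(opts: {
  flowsVendorTermsDownstream: boolean;
  vendorNamedJointly: boolean;
}): SupplyPattern {
  if (opts.flowsVendorTermsDownstream) return "pass-through";
  if (opts.vendorNamedJointly) return "co-branded";
  return "repackaging";
}
```

Note the default branch: doing nothing explicit lands you in repackaging, which mirrors how most product companies end up there without realising it.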

Common mistakes

  • No contract review for AI clauses — old contracts do not cover this.
  • Trusting click-wrap vendor terms — they default to maximum vendor protection.
  • No insurance — D&O claims for AI failures are now real.
  • No incident response runbook — liability triggers timeline-sensitive obligations.
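An incident response runbook can make the notification clause mechanical rather than judgment-dependent. A sketch, assuming the SLA is expressed in hours from discovery (the helper names are illustrative):

```typescript
// Compute the hard contractual deadline from discovery time plus the
// notification SLA (clause-6 style, hours from discovery).
function notificationDeadline(discoveredAt: Date, slaHours: number): Date {
  return new Date(discoveredAt.getTime() + slaHours * 3_600_000);
}

// Check whether a (proposed or actual) notification time meets the SLA.
function meetsSla(discoveredAt: Date, notifiedAt: Date, slaHours: number): boolean {
  return notifiedAt.getTime() <= notificationDeadline(discoveredAt, slaHours).getTime();
}
```

The point of wiring this into a runbook is that the clock starts at discovery, not at escalation — a 48-hour SLA spent in internal triage is already blown.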

Where this is heading

Three trends by 2027: AI Liability Directive enters force in EU, professional indemnity insurance becomes standard for AI consulting, and class-action-friendly rulings establish precedent. Document your operations now; contract carefully; insure adequately.


© 2026 Loadout. Built on Angular 21 SSR.