Tutorial · 8 min read

Self-hosted MCP registry: build your own internal catalogue

When enterprise policy says no public registries, you build one. A practical architecture, signing workflow, and client config for a self-hosted MCP registry.

Enterprise security teams are banning the public MCP marketplaces, one at a time. The alternative is a self-hosted registry: internal, reviewed, signed, on a network you control. Here is a working architecture and the ~300 lines of code that make it real.

Why you cannot use the public registries

Three drivers push enterprises to build their own:

  • Supply chain risk — any public MCP is a potential supply chain vector.
  • Data residency — telemetry from a public registry leaks which tools your org is exploring.
  • Internal tools — most enterprises have custom MCP servers that cannot live on public registries at all.

What a registry actually stores

Three things:

  1. Metadata per server: name, version, description, maintainer, scope, reviewed-at.
  2. Artefact — the actual package, typically a tarball, a Docker image reference, or a git tag.
  3. Signature — a cryptographic signature over both, anchored to a trusted key.
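Taken together, these three parts form one record per server. A minimal sketch of that record in TypeScript; the field names here are illustrative, not a fixed spec:

```typescript
// One registry entry: metadata, artefact pointer, and the signature
// that binds them together. Field names are illustrative.
interface RegistryEntry {
  name: string;
  version: string;
  description: string;
  maintainer: string;
  scope: string[];
  reviewedAt: string;        // ISO date of the last review
  artefact: {
    url: string;             // tarball, Docker/OCI image reference, or git tag
    sha256: string;          // digest the signature covers
  };
  signature: string;         // signature over metadata + artefact digest
}

const example: RegistryEntry = {
  name: 'internal-postgres',
  version: '1.2.0',
  description: 'Read-only access to the finance data warehouse.',
  maintainer: 'data-platform@example.com',
  scope: ['read:finance.*'],
  reviewedAt: '2026-04-15',
  artefact: {
    url: 'oci://registry.example.com/mcp/postgres:1.2.0',
    sha256: '3f9a...',
  },
  signature: 'base64:...',
};
```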

Minimal architecture

developer → submit PR → review → sign → registry.internal
agent host ← pull metadata ← verify signature ← launch

  • Static site (S3 + CloudFront) for metadata JSON and signed artefact URLs.
  • A tiny signing server (Lambda, Cloud Run) that signs approved artefacts with an HSM-backed key.
  • A client-side verifier that every MCP host runs before launching a server.

No database required for the basic case; the git repo is your source of truth.

Ingestion workflow

  1. Maintainer opens a PR adding a servers/<name>.yaml file with metadata and the artefact URL.
  2. CI runs static checks: npm audit, dependency diff against last version, malicious pattern scan.
  3. Reviewer approves.
  4. Merge triggers the signing server, which fetches the artefact, hashes it, and emits a signature next to the metadata.
  5. Static site rebuilds.
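The signing step (step 4) can be sketched with Node's built-in crypto. One simplification to flag loudly: the keypair below is generated in-process for illustration, whereas in production the private key stays inside the HSM/KMS and only a sign() call crosses that boundary.

```typescript
// Sketch of the signing server's core: hash the fetched artefact, then
// sign metadata and digest together so neither can be swapped alone.
import { createHash, generateKeyPairSync, sign, verify } from 'node:crypto';

// Stand-in for the HSM-backed key; generated locally for illustration only.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

function signArtefact(metadataJson: string, artefactBytes: Buffer): string {
  const digest = createHash('sha256').update(artefactBytes).digest('hex');
  const payload = Buffer.from(`${metadataJson}\n${digest}`);
  // Ed25519 takes no separate digest algorithm, hence null.
  return sign(null, payload, privateKey).toString('base64');
}

function verifySignature(
  metadataJson: string,
  artefactBytes: Buffer,
  sigB64: string,
): boolean {
  const digest = createHash('sha256').update(artefactBytes).digest('hex');
  const payload = Buffer.from(`${metadataJson}\n${digest}`);
  return verify(null, payload, publicKey, Buffer.from(sigB64, 'base64'));
}
```

The signature is emitted next to the metadata JSON on the static site; clients only ever see the public key.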

Distribution workflow

Client side, simplified:

// verify-and-launch.ts
// Fetch signed metadata, verify the artefact against the pinned trust
// root, and only then launch the MCP server.
import { verify, hash } from './crypto';
import { fetchArtefact, launchMcp } from './launcher';
import { trustedPublicKey } from './trust-root'; // deployed via OS config, never fetched

const res = await fetch(`https://mcp.internal/servers/${name}.json`);
const metadata = await res.json();
const artefact = await fetchArtefact(metadata.artefactUrl);
const ok = verify(metadata.signature, hash(artefact), trustedPublicKey);
if (!ok) throw new Error('signature invalid');
await launchMcp(artefact);

Every MCP host on an employee laptop runs a wrapper that calls this verifier before it launches anything.

Audit log

Log every fetch from the registry with user_id, server_name, version, and verification outcome. This becomes evidence for your audit trail.
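A sketch of what one such record could look like, formatted as a JSON Lines entry so it slots into whatever log pipeline is already in place. Field names are illustrative:

```typescript
// One audit record per registry fetch, serialised as a single JSON line.
interface AuditRecord {
  ts: string;          // ISO timestamp of the fetch
  user_id: string;
  server_name: string;
  version: string;
  verified: boolean;   // signature verification outcome
}

function auditLine(record: AuditRecord): string {
  return JSON.stringify(record);
}

// Append each line to the log, e.g.:
//   appendFileSync('/var/log/mcp-registry/audit.jsonl', auditLine(rec) + '\n');
```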

Metadata schema

# servers/internal-postgres.yaml
name: internal-postgres
version: 1.2.0
maintainer: data-platform@example.com
description: Read-only access to the finance data warehouse.
scope:
  - read:finance.*
data_classification: confidential
lawful_basis: contract
reviewed_at: 2026-04-15
reviewers:
  - security@example.com
  - data-platform@example.com
artefact:
  url: oci://registry.example.com/mcp/postgres:1.2.0
  sha256: 3f9a...
signature: base64:...
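The static checks in CI (step 2 of the ingestion workflow) can include a structural check against this schema. A sketch, assuming the YAML has already been parsed to a plain object; the required-field list mirrors the schema above:

```typescript
// CI-side sanity check for a submitted servers/<name>.yaml entry.
// Returns a list of problems; an empty list means the entry passes.
const REQUIRED = [
  'name', 'version', 'maintainer', 'scope',
  'reviewed_at', 'artefact', 'signature',
] as const;

function validateEntry(entry: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const field of REQUIRED) {
    if (entry[field] === undefined) errors.push(`missing field: ${field}`);
  }
  // Versions must be pinned exactly -- no ranges, no "latest".
  if (typeof entry.version === 'string' && !/^\d+\.\d+\.\d+$/.test(entry.version)) {
    errors.push(`version must be an exact semver: ${entry.version}`);
  }
  const artefact = entry.artefact as { sha256?: string } | undefined;
  if (artefact && !artefact.sha256) errors.push('artefact.sha256 is required');
  return errors;
}
```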

What you get once it is running

  • Every MCP server in production is traceable to a reviewed PR.
  • Credential scopes are pinned in reviewed metadata, not copied into configs by hand.
  • Version pinning is enforced; rolling back is a config change.
  • Internal tools ship on the same rails as reviewed third-party ones.
  • Your security team can grep the registry for any deprecated dependency and track who still uses it.

What you cannot skip

Three non-optional controls:

  1. The signing key lives in an HSM or KMS; no raw key material on disk.
  2. Verification happens client-side, not just at ingestion.
  3. The trust root (public key) is deployed to hosts through the same channel as OS config — not fetched from the registry itself.
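Control 2 in practice means the client recomputes the artefact digest itself and compares it to the pinned sha256 before any signature check, so a swapped artefact fails even when the metadata is replayed. A sketch of that comparison:

```typescript
// Recompute the artefact digest client-side and compare it to the
// sha256 pinned in the registry metadata.
import { createHash, timingSafeEqual } from 'node:crypto';

function digestMatches(artefactBytes: Buffer, pinnedSha256: string): boolean {
  const actual = createHash('sha256').update(artefactBytes).digest('hex');
  // timingSafeEqual throws on unequal lengths, so guard first;
  // constant-time comparison avoids leaking match prefixes.
  return (
    actual.length === pinnedSha256.length &&
    timingSafeEqual(Buffer.from(actual), Buffer.from(pinnedSha256))
  );
}
```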