MCP, OpenAI function calling, and LangChain tools all answer the same question — “how does an AI model use external tools?” — but they make different trade-offs. This post compares them on the axes that actually matter in 2026.
## The short answer
- Use MCP when you want cross-host portability and a growing third-party ecosystem.
- Use OpenAI function calling when you are locked into OpenAI’s API and want the simplest possible surface.
- Use LangChain tools when you are building an agent framework and need orchestration helpers (retries, memory, graphs).
## Feature-by-feature
| Criterion | MCP | OpenAI functions | LangChain tools |
|---|---|---|---|
| Portability across hosts | ✅ Any MCP client | ❌ OpenAI only | 🟡 Python / JS SDK |
| Remote or local | Both (stdio, HTTP+SSE) | Remote only | Both |
| Typed schema | JSON Schema | JSON Schema | Pydantic / Zod |
| Streaming results | ✅ | ✅ (tool_calls) | 🟡 Framework-dep |
| Resource subscription | ✅ Native | ❌ | ❌ |
| Prompt templates | ✅ Native | ❌ | ✅ |
| Ecosystem size (2026) | 1000+ servers | Custom only | 200+ official tools |
| Learning curve | Medium | Low | High |
## When each one wins

### MCP wins when…
You ship a product used across multiple AI hosts — Claude Desktop, Cursor, Windsurf — or you want third-party developers to integrate with you. Write the server once, every host benefits. Example: if you run a SaaS, publishing an official MCP server is the 2026 equivalent of publishing a public REST API.
Also: local-first use cases. MCP’s stdio transport runs entirely on the user’s machine. No server to host, no OAuth dance. Perfect for filesystem access or local memory tools.
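To make the stdio transport concrete, here is a sketch of the JSON-RPC 2.0 messages an MCP client and server exchange for a single tool call. The message shapes (`tools/call`, `params.name`, `params.arguments`, a `result.content` list) follow the MCP spec; the `read_note` tool and its output are invented for illustration.

```python
import json

def make_tool_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_tool_call(raw: str) -> str:
    """Toy server-side handler: parse the request, run the tool, reply."""
    req = json.loads(raw)
    args = req["params"]["arguments"]
    # A real server would dispatch on req["params"]["name"]; we fake one tool.
    text = f"note {args['id']}: remember to ship"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    })

# Over stdio these lines would travel on stdin/stdout of the server process.
request = make_tool_call_request(1, "read_note", {"id": 7})
response = json.loads(handle_tool_call(request))
print(response["result"]["content"][0]["text"])
```

Because the transport is just newline-delimited JSON on stdin/stdout, any host that can spawn a subprocess can be an MCP client — which is exactly why no hosted server or OAuth flow is needed.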
### OpenAI function calling wins when…
You are building on GPT-4.x or GPT-5.x and nothing else. The function-calling API is dead simple: define a JSON schema, pass it in the request, handle the tool_calls response. No separate server process. Great for internal tools where “portability” is not a concern.
### LangChain tools win when…
You are orchestrating multi-step agent flows with retry policies, shared memory, graph state, or complex tool chains. LangChain (and LangGraph) give you the framework glue. MCP is lower-level — it defines how to talk to a tool, not how to orchestrate ten tools in a row.
In practice, many teams use LangGraph as the orchestrator and consume MCP servers as individual tools. Best of both worlds.
## Can I mix them?
Yes, and most production systems do. Common patterns:
- MCP server wraps an OpenAI function-calling app for Claude Desktop users.
- LangGraph agent consumes MCP servers like they were LangChain tools (via the `langchain-mcp-adapters` package).
- Custom backend exposes both MCP and OpenAI-compatible function schemas from the same tool definitions.
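The last pattern — one source of truth projected into both wire formats — works because both sides accept plain JSON Schema. The dict shapes below match MCP’s `tools/list` result (`inputSchema`) and OpenAI’s `tools` parameter (`function.parameters`); the `search_docs` tool itself is made up.

```python
TOOL = {
    "name": "search_docs",
    "description": "Full-text search over product docs.",
    "schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def as_mcp(tool: dict) -> dict:
    """MCP advertises tools with an inputSchema field (tools/list)."""
    return {"name": tool["name"], "description": tool["description"],
            "inputSchema": tool["schema"]}

def as_openai(tool: dict) -> dict:
    """OpenAI wraps the same JSON Schema under function.parameters."""
    return {"type": "function", "function": {
        "name": tool["name"], "description": tool["description"],
        "parameters": tool["schema"]}}

# The schema travels unchanged; only the envelope around it differs.
print(as_mcp(TOOL)["inputSchema"] == as_openai(TOOL)["function"]["parameters"])
```

Keeping the schema in one place means validation, docs, and both integrations never drift apart.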
## What about Google / Anthropic tool use?
Both Gemini and Claude support “native” tool use in their APIs that looks similar to OpenAI functions. Anthropic additionally pushes MCP as the cross-vendor spec. If you are already on Claude, MCP is the canonical path.
## Bottom line
In 2026, MCP is becoming the de facto standard for tool distribution. OpenAI function calling stays the simplest option for single-vendor apps. LangChain remains the king of orchestration. Pick the one that matches the axis you care about most — or combine them.
Browse 130+ ready-made MCP servers if you want to skip building and start shipping.