OpenClaw (Clawdbot / Moltbot) Still Doesn’t Speak MCP — and That’s the Point
Current status: OpenClaw (a.k.a. Clawdbot) does not have native support for MCP servers — not as a core, first-class feature.
And at this point: it’s not just “not yet.”
A GitHub issue was opened asking for native MCP (Model Context Protocol) support — connect to MCP servers, expose their tools alongside native tools, configure per-agent servers, support stdio and HTTP transports — the kind of thing you’d expect from an agent platform that wants to be extensible without turning into an integration junk drawer.
That issue was closed as “not planned.”
https://github.com/openclaw/openclaw/issues/4834
Why this isn’t a nitpick
MCP is not a marketing acronym. It’s a standard interface for tools.
And in the MCP world, “native” has a meaning:
- stdio transport for local tool servers
- Streamable HTTP transport for remote servers
- optional SSE for server-to-client streaming and notifications
- authorization for HTTP-based transports built around OAuth 2.1
(Yes, real auth, not “hope nobody finds this port.”)
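To make "speaking MCP" concrete: over the stdio transport, client and server exchange newline-delimited JSON-RPC 2.0 messages. Here's a minimal sketch of building a `tools/call` request as the MCP spec defines it; the tool name and arguments are hypothetical, chosen only for illustration.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Frame an MCP `tools/call` request for the stdio transport."""
    message = {
        "jsonrpc": "2.0",          # MCP messages are JSON-RPC 2.0
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(message) + "\n"  # newline-delimited framing

# Hypothetical tool invocation against a local MCP server
line = mcp_tool_call(1, "search_issues", {"query": "native MCP support"})
parsed = json.loads(line)
```

That framing is the entire entry fee for stdio; the Streamable HTTP transport carries the same JSON-RPC payloads over POST, with optional SSE for server-initiated messages.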
If you don’t treat MCP as a first-class primitive, you don’t get a tool fabric.
You get adapters.
You get wrappers.
You get “just write a plugin.”
You get “this server works on my laptop.”
You get a different tool story for every team.
The predictable outcome: glue code and accidental architecture
When a platform doesn’t speak MCP natively, users end up doing one of two things:
- Running a bridge layer that translates MCP into whatever the host agent understands.
- Re-implementing tool integrations repeatedly, each time slightly differently, with slightly different auth and logging, and slightly different failure modes.
Both “work.”
Neither scales gracefully.
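What that bridge layer tends to look like in practice: a translation shim from MCP tool descriptors (the `name` / `description` / `inputSchema` shape returned by a server's `tools/list`) into whatever the host agent expects. The `HostTool` format below is hypothetical, standing in for any non-MCP agent's tool schema.

```python
from dataclasses import dataclass

@dataclass
class HostTool:
    """Hypothetical host-agent tool shape (varies per platform)."""
    id: str
    summary: str
    parameters: dict

def bridge_mcp_tool(mcp_tool: dict) -> HostTool:
    """Translate one MCP tool descriptor into the host agent's format."""
    return HostTool(
        id=mcp_tool["name"],
        summary=mcp_tool.get("description", ""),
        parameters=mcp_tool.get("inputSchema", {"type": "object"}),
    )

# Example descriptor, as an MCP server would return from tools/list
descriptor = {
    "name": "create_ticket",
    "description": "Open a ticket in the tracker",
    "inputSchema": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
    },
}
tool = bridge_mcp_tool(descriptor)
```

Every team that writes this shim makes slightly different choices about auth, logging, and error mapping, which is exactly the drift described above.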
What Clawery does differently: MCP as a core capability
Clawery’s stance is simple: MCP should be native — not bolted on, not plugin-shaped, not dependent on community glue to become usable at scale.
Clawery supports MCP as a first-class, native offering, including everything you’d expect from production tool connectivity:
- HTTP connectivity (remote MCP servers)
- SSE streaming where appropriate
- OAuth2 / OAuth 2.1–style authorization flows for protected MCP endpoints
- consistent tool access across a multi-agent system
In a Clawery system, main agents can use MCP tools, and when your workflow expands into orchestration, sub-agents can use the same MCP tool layer — alongside skills the agents generate and execute as part of the workflow.
This isn’t a demo trick. It’s what “MCP as an architectural primitive” looks like.
If you want the reference point for Clawery’s approach (and the tone this post is written in), start here:
https://clawery.com/blog/clawdbot-aka-openclaw-for-enterprise
Skills: JavaScript, but the execution model is the whole story
Skills are a good idea. The difference is where they run and what they can touch.
Clawery supports agents creating skills in JavaScript, then executing them in a secure enclave-style isolated environment — not in the same runtime that holds secrets and has ambient access to everything.
And secrets?
Clawery uses a secrets store. Credentials live there. They’re not sitting in an agent-reachable config file waiting to be “helpfully summarized.”
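A sketch of the distinction being drawn: config holds only a reference, and the credential is resolved from a store at the moment it's needed, never written into an agent-readable file. The `SecretStore` interface and the `secret://` reference scheme here are hypothetical; a real deployment would back this with Vault, a cloud secrets manager, or similar.

```python
import os

class SecretStore:
    """Hypothetical secrets-store interface: resolve references, not values."""

    def __init__(self, backend: dict):
        self._backend = backend  # stand-in for an external store

    def resolve(self, ref: str) -> str:
        # Config carries "secret://github-token"; the value itself
        # never lands in the agent's config directory.
        key = ref.removeprefix("secret://")
        value = self._backend.get(key)
        if value is None:
            value = os.environ.get(key.upper().replace("-", "_"))
        if value is None:
            raise KeyError(f"unresolved secret reference: {ref}")
        return value

store = SecretStore({"github-token": "ghp_example"})
token = store.resolve("secret://github-token")
```

Contrast with tokens sitting in a JSON file: here a leaked config leaks a pointer, not a credential.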
Meanwhile, OpenClaw’s secrets story is… local files
OpenClaw is candid in its own security documentation about what lives on disk under ~/.openclaw/.
That includes:
- openclaw.json (which may include tokens)
- per-agent auth-profiles.json with API keys and OAuth tokens
- channel credential JSON files under ~/.openclaw/credentials/**
This is fine for a local, self-hosted assistant.
It is not what “enterprise-grade tool fabric” looks like.
The punchline
OpenClaw’s community asked for native MCP.
They opened an issue.
They made a clear request.
They laid out exactly what "native" should mean.
And they got: closed as not planned.
Clawery made the opposite bet: native MCP, real transports, real auth, a multi-agent design where tools are uniformly accessible, JS skills executed in isolation, and secrets handled like secrets.
If you’re serious about deploying agents as systems — not as experiments — those differences stop being philosophical and start being operational.