Why MCP is a nightmare


If you don’t know already, I was a major MCP-stan a few months ago when it was first announced, because theoretically – on paper – it’s such a cool idea. Plug-and-play tools for LLM clients? Sign. Me. Up.

However, having had some time to calm down and actually spend time with MCP servers, I may have changed my mind. So please let me present a not-so-gentle rundown of why MCP, in its current form, is overhyped and undercooked.

1. It’s an insecure hot mess

This is my main bugbear: MCP was supposed to ship with a proper server-side component. But in the reference implementations? Nothing. What followed was the predictable flood of DIY implementations from excited AI devs, which now litter GitHub and Discord with homegrown, often questionable MCP servers.

That’s right: there are hundreds, potentially thousands, of insecure bits of code written by techbros that you are expected to run as a local userspace process. Any IT admin or security professional should be tearing their hair out. What is this absolute nonsense? Not only are these local processes, they are written in two of the worst possible ecosystems for the job: Python and NodeJS.

I don’t dislike Node and Python, but their package ecosystems are notoriously prone to supply-chain attacks (looking at you, left-pad). And when you install these monstrosities, you JUST RUN THEM FROM THE WEB with npx or uvx.

When did we collectively decide that piping to shell from a GitHub repo was OK?
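For context, here is the shape of a typical client config (the format Claude Desktop uses; the package name below is made up for illustration). It tells the client to pull a server straight from the registry and run it:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-filesystem"]
    }
  }
}
```

The `-y` flag tells npx to download and execute without prompting, so whatever is published under that name at the moment your client starts is what runs on your machine.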

Until servers properly support SSE with some form of authentication, AND are vetted and governed by an internal IT team, MCP should stay far away from the enterprise.

Caveat: Of course, if you, as an organisation, are building these MCP servers yourself, then by all means, off you go – that’s all good, but now you run into the other problem…

2. Versioning – when did we care about that?

Unlike mature APIs, MCPs lack established versioning practices. They’re just processes. There’s no standard for compatibility, change management, or even knowing which version you’re running.

If you’re running your own servers, congrats! Now you get to reinvent dependency and version management on top of everything else. Good luck aligning updates across your team, or even knowing when something breaks.

You’ve basically turned a simple integration into a distributed systems problem.
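To make that concrete, here is a sketch of the kind of version gate you end up writing yourself when you self-host MCP servers. Nothing here is part of the MCP spec; the server name and allowlist are invented for illustration:

```python
# The ad-hoc version management MCP pushes onto you: since there is no
# standard compatibility contract, teams end up keeping their own
# allowlist of vetted server builds. Names and versions are made up.

ALLOWED = {
    "internal-search-mcp": {"1.4.2", "1.4.3"},  # builds the team has vetted
}

def check_server(name: str, reported_version: str) -> bool:
    """Return True only if this exact server build has been approved."""
    return reported_version in ALLOWED.get(name, set())

print(check_server("internal-search-mcp", "1.4.2"))  # vetted build
print(check_server("internal-search-mcp", "2.0.0"))  # nobody has vetted this
```

This is dependency management reinvented badly, which is exactly the point: every team ends up hand-rolling it.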

3. It’s a usability hell-hole

Nobody said APIs are easy, but you can at least use curl to explore your favourite API. With MCP, you need extra language-dependent tooling just to explore and debug a server. To what end? To let an LLM client call an API.

And when configuring them, many require you to clone random repos, fiddle with YAML, and cross your fingers. If you’re trying to debug why your LLM can’t talk to your local plugin server, you’re often knee-deep in untyped JSON. Not fun.
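Compare the two worlds. A plain API is one curl command; with MCP, the bare minimum a client must exchange over stdio just to find out what tools exist is a JSON-RPC handshake. The method names below follow the MCP spec; the protocol version string was current at the time of writing and may have moved on:

```python
import json

# The minimum message sequence to list an MCP server's tools over stdio,
# versus `curl https://api.example.com/tools` for a plain HTTP API.

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "debug-probe", "version": "0.0.1"},
    },
}
# Notification acknowledging the handshake (no id: it expects no reply).
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

for msg in (initialize, initialized, list_tools):
    print(json.dumps(msg))
```

Three framed messages over a child process's stdin, before you have seen a single tool. That is the debugging surface you inherit.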

The users running Claude Desktop or ChatGPT desktop aren’t about to whip out vim and start grokking JSON.

Smarter clients provide and manage MCP repositories on your behalf. Think about this scenario I ran into recently:

  1. I open VSCode
  2. I install Cline or RooCode to enable AI code assist
  3. Cline has an MCP repository, so I can install MCPs for it.

It’s a plugin, running (and installing!) a plugin, calling an API. Why on earth are we doing this to ourselves?

4. It’s just SDKs with extra steps

I want to specifically talk about the much-hyped “MCP-to-unlock-agentic-workflows” use case that most vendors will peddle to sell their latest AI tooling. 

It’s nonsense, and here’s why: unless you are building AGI – or a “generic” agent – your agents will always be custom applications, likely built with agent frameworks like ADK or AgentChat. Your agent will not be a simple YAML file of instructions; it will have a bunch of code to control behaviour. Why would you add MCP process management and API compliance to that when you can just integrate directly?
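Here is what “just integrate directly” looks like in practice: a plain function your agent framework can call, no MCP process in between. This is a generic sketch, not any particular framework’s API; the endpoint is a placeholder, and the fetch callable is injected so the example stays testable:

```python
import json

def make_ticket_tool(fetch):
    """Wrap an HTTP fetch callable as a plain agent tool.

    `fetch(url) -> str` is injected for testability; in production,
    pass a urllib- or requests-based function.
    """
    def get_ticket(ticket_id: str) -> dict:
        # Placeholder endpoint: swap in the API you actually use.
        return json.loads(fetch(f"https://support.example.com/api/tickets/{ticket_id}"))

    # The schema shape most tool-calling LLM APIs accept directly.
    get_ticket.schema = {
        "name": "get_ticket",
        "description": "Fetch one support ticket by id",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    }
    return get_ticket
```

That is the whole integration: one function and one schema dict, with no child process to spawn, version, or secure.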

Remember: your agent is probably being built by AI anyway, so its code can be fire-and-forget. It is cheaper to have your AI code assistant write a sloppy integration that fits your use case than to mess with standards.

Given how inconsistent models are in terms of repeatability and reliability of responses, we should treat all agents as temporary code artefacts. 

Smolagents-style is the future

This is why I am of the (probably minority) opinion that frameworks like Hugging Face’s smolagents are actually the way to go. Instead of building persistent infrastructure, they let the model write and execute short-lived scripts in a sandbox to solve tasks.

It’s light, agile, and fits the actual nature of today’s LLMs: flaky, creative, and short-term. It’s also way less brittle than building a whole toolchain around an MCP plugin interface.
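A minimal illustration of that loop (not the actual smolagents API): the model emits a short script, a sandbox runs it with only an allowlisted set of builtins, the result comes back, and the code is thrown away. Real sandboxes use isolated processes or containers and are far stricter than this:

```python
# Sketch of the smolagents-style pattern: execute model-generated code
# in a restricted namespace and keep only the result.

SAFE_BUILTINS = {"sum": sum, "min": min, "max": max, "len": len}

def run_snippet(code: str) -> object:
    """Execute a generated snippet with allowlisted builtins; return `result`."""
    namespace = {"__builtins__": SAFE_BUILTINS}
    exec(code, namespace)  # real systems isolate this in a process/container
    return namespace.get("result")

# Pretend the model produced this to answer "total of the Q3 invoices":
generated = "result = sum([1200, 450, 975])"
print(run_snippet(generated))  # 2625
```

No server to install, no version to pin, nothing left running afterwards, which matches how disposable the generated code really is.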

Reality check 

MCP is a clever idea, but it’s been overhyped because of a fundamental misunderstanding about where we are in the AI supply chain. We don’t have generic agents. What we have are custom-built, app-specific workflows. 

Until that changes, direct API integrations will be faster, safer, and more maintainable.

That makes MCP useful in exactly one place: chat interfaces. And even there, the experience is often clunky.

Let’s stop pretending it’s more than that.