Making sense of MCP: Why standardization matters in the AI supply chain


Not to toot my own horn, but at the Tyk LEAP 2.0 conference in February, I predicted we’d see a standard for LLM-to-LLM interoperability hot on the heels of Anthropic’s Model Context Protocol (MCP). And wouldn’t you know it, a month later, Google open-sourced its Agent2Agent (A2A) protocol: a spec for agents to discover, call, and collaborate with one another. Spooky.

Why does this matter? Because in the AI supply chain, we lack a shared language, something I’ve spoken about before. We need clear, agreed-upon standards for how components (LLMs, agents, APIs, you name it) can plug into each other securely and sensibly. Without that, enterprise adoption is high-risk. And high-risk doesn’t scale.

MCP was supposed to be a step in the right direction. And in some ways, it is. But its initial focus was narrow: user-to-LLM interaction, mostly in a consumer context. Handy for solo hackers, sure. But if you’re a CIO and you find out your staff are spinning up MCP servers on their corporate laptops? You’re well within your rights to panic and deploy the ban-hammer.

Some have joked that the “S” in MCP stands for “security” (there isn’t one). But let’s be clear: security was never the design priority, at least not in the way MCP is currently used in the wild. If you’re not familiar, here’s how a typical MCP setup looks today:

  1. You have an MCP-compatible app, like a code editor or a native chat client.
  2. You then download the server (usually via a Node npx or Python pip command) and just *run it* – like some psychopath, straight from wherever it came from. A typical client config for this step is sketched below.
  3. You start interacting with your LLM on your corporate PC, feeding it files, emails, names, and other PII.
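
To make step 2 concrete, here’s the shape of a typical local setup: a client config that tells the app to fetch a server package from a public registry and execute it. This sketch follows the claude_desktop_config.json convention some MCP clients use; the server package and path are illustrative.

```jsonc
{
  "mcpServers": {
    "filesystem": {
      // npx fetches the published package and runs it directly –
      // this is the “just run it” step from the list above
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}
```

Nothing in that flow verifies where the package came from, what it does, or what it can reach on your machine.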

Insanity.

MCP in this form is 100% not ready for, and should never be used by, your staff. That said, in 99% of real enterprise environments, most users won’t even be able to run the server in the first place. Installing and maintaining a Node.js or Python environment with the necessary packages is usually (and thankfully) blocked by IT policy.

MCP has always included support for remote servers, reachable over HTTP transports rather than a locally spawned process. That’s where things get interesting and actually useful. A remote MCP setup, provided and secured by the org itself, allows companies to give their teams access to these tools in a managed, auditable way. It’s safer, saner, and frankly the only real way forward for enterprise AI tooling. Unfortunately, at the time of writing, very few native MCP clients support remote MCP.
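
For contrast, here’s roughly what the client side of a managed remote setup looks like. This is a minimal sketch, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and its streamable HTTP transport; the internal URL and MCP_TOKEN environment variable are placeholders for whatever your org’s gateway dictates.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to an org-hosted MCP server over HTTP instead of spawning
// an arbitrary local process. URL and auth token are illustrative.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.example-corp.internal/mcp"),
  { requestInit: { headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } } },
);

const client = new Client({ name: "corp-mcp-client", version: "1.0.0" });
await client.connect(transport); // runs the MCP initialize handshake

// Only the tools the org has chosen to expose are discoverable.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

Because the server sits behind the organisation’s own gateway, every call can be authenticated, logged, and revoked centrally – which is exactly the “managed, auditable” property a locally spawned server can’t offer.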

What we’re seeing now is early signs of a proper AI interoperability stack emerging, with protocols like MCP and A2A laying the groundwork. Yes, they’re messy right now. But even the existence of these specs is a huge step. It encourages innovation, better prompting, and new client behaviours, all based on a shared standard.

And that’s the crux: standards create structure, and structure drives adoption. Without them, the AI supply chain is duct tape and dreams. With them, we get a reliable foundation to build real enterprise-grade systems safely, repeatably, and at scale.

At Tyk, we see MCP as a cornerstone of the future AI stack, but only if implemented correctly. That’s why we’re building the tools to make enterprise-ready, secure interoperability a reality.

If you’re interested in how we’re making that happen, read part two of this series: Building the AI supply chain: How Tyk is shaping MCP in practice.