Think about how a modern smartphone is built. Every device contains thousands of components, and companies take different approaches to integrating them.
On one side, there’s Apple, which tightly controls every aspect of the iPhone’s ecosystem – hardware, software, and services – minimizing third-party influence. On the other side, Android fosters an open ecosystem where different manufacturers, chipmakers, and app developers contribute to a modular and customizable experience.
Now, what if I told you AI is heading down a similar path? We’re at an inflection point where the AI market is bifurcating: the all-encompassing bet on AGI versus an open, flexible ecosystem. But before we get into that, let’s talk about AI governance – because for enterprises, structuring AI correctly is just as critical as deciding which AI models to use.
Why AI governance matters
AI governance is often framed in high-level ethical terms – bias, alignment, existential risks. While important, these conversations often overlook something more immediate: the practical supply chain that enables AI to function effectively in an enterprise.
Consider this fundamental question: Who controls AI in the enterprise? Should businesses be locked into a single vendor’s AI ecosystem, or should they have the flexibility to mix and match AI models, tools, and platforms to suit their needs?
This is an age-old debate in IT – think of Shadow IT – and AI is not immune to it. Without a structured approach, enterprises will find themselves struggling with vendor lock-in, security risks, and operational inefficiencies.
The key AI challenges enterprises face
1. Vendor lock-in & lack of flexibility
Many AI providers aim to be a one-stop shop, capturing maximum value by locking enterprises into their ecosystem. While this strategy builds business moats, it limits an enterprise’s ability to adapt.
AI evolves rapidly. If you’re locked into one ecosystem, how do you pivot when a better model emerges? How do you integrate AI capabilities that align with shifting business strategies? Composability and flexibility must be priorities when evaluating AI solutions.
2. Security & data compliance risks
Who owns your AI outputs? AI models are black boxes – once data leaves your network, it’s often out of your control. Many enterprise AI agreements attempt to mitigate this with legal safeguards, but ultimately, compliance is only as strong as its enforcement.
A related problem is Shadow AI, where employees use unauthorized AI tools, potentially exposing proprietary data. The classic example? GitHub Copilot inadvertently reproducing proprietary code snippets. Enterprises must implement robust AI governance to prevent data leakage and ensure compliance.
3. Operational inefficiencies
Talk to any CIO, and you’ll hear concerns about fragmented AI deployments. Different departments experiment with different AI tools – finance uses one model, customer support another, IT something else – creating silos with no central oversight.
This inefficiency mirrors early BYOD (Bring Your Own Device) challenges. The solution wasn’t banning personal devices but rather embracing structured governance, similar to how Federated API Management provides oversight while allowing flexibility. Enterprises must take a similar approach to AI – enabling decentralization while maintaining control.
The four building blocks of AI adoption
When structuring AI adoption, enterprises must consider four key components:
- Vendors – The AI models themselves (e.g., OpenAI, Anthropic, Meta, Mistral).
- Interfaces – How users interact with AI (chatbots, enterprise tools, assistants).
- Data – The lifeblood that makes AI valuable (internal documents, customer interactions, logs).
- Tooling – The governance layer (monitoring, security, compliance mechanisms).
Consider a financial firm deploying AI for customer support. It needs:
- A model like GPT-4.
- An interface, like a chatbot or assistant, to access it.
- An internal knowledge base to reference.
- Compliance guardrails to ensure accuracy.
Each of these components forms part of the AI supply chain – an interconnected pipeline of vendors, interfaces, data, and tooling that must be structured effectively.
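The four components above can be sketched as a single pipeline. This is a purely illustrative sketch – the class names (`Vendor`, `Interface`, and so on) and the naive compliance check are hypothetical, not a real framework – but it shows how the pieces of the supply chain compose:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four building blocks wired together.
# All names here are illustrative; a real deployment would call the
# vendor's API where the stubbed answer is returned below.

@dataclass
class Vendor:          # the model provider
    name: str
    model: str

@dataclass
class DataSource:      # the knowledge the model can reference
    documents: list = field(default_factory=list)

@dataclass
class Tooling:         # governance: here, a naive compliance check
    blocked_terms: list = field(default_factory=list)

    def allows(self, text):
        return not any(term in text.lower() for term in self.blocked_terms)

@dataclass
class Interface:       # how the user reaches the model
    vendor: Vendor
    data: DataSource
    tooling: Tooling

    def ask(self, question):
        if not self.tooling.allows(question):
            return "Request blocked by compliance policy."
        context = " | ".join(self.data.documents)
        return f"[{self.vendor.model}] answer grounded in: {context}"

support = Interface(
    vendor=Vendor("OpenAI", "gpt-4"),
    data=DataSource(["refund policy", "account FAQ"]),
    tooling=Tooling(blocked_terms=["account number"]),
)
print(support.ask("How do refunds work?"))
```

The point of the sketch is the seams: each component is swappable on its own, which is exactly what a supply chain should give you.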
In every other industry – manufacturing, software, shipping – supply chains enable specialization and flexibility. In my humble opinion, AI should work the same way.
It’s all APIs, baby
Under the hood, this pipeline is essentially just a bunch of API calls (funny how we’re framing it this way, eh? Being an APIM vendor and all).
An AI ultimately derives its value and purpose from the agency and knowledge its user grants it, and the underlying transport for that knowledge and agency is the humble API call.
Now, I’m not sure how many of you remember the horrors of early API standardisation – yes, I’m talking about SOAP, XML, and all of the WS-* standards that erupted when enterprise vendors decided they needed to standardise but still wanted to carve out niches of their own.
We’ve come a long way since then. We have the OpenAPI spec for portable API definitions, and AsyncAPI attempting the same for real-time APIs. We still don’t have a single, unifying standard, but what we do have – at least in most cases – are APIs that a humble human can still use with some rudimentary tooling, without the need for massive libraries or SDKs to build out integrations. We also have industry-wide standardisation: with Open Banking, for example, financial interoperability has created massive flexibility in the fintech industry.
LLM APIs are a class of their own
Like banking APIs, LLM APIs are different – they form a class of APIs that are inherently similar to one another.
Most LLMs take the same parameters to tune their output; the more advanced ones accept different input types (images, audio, video, text, etc.); and some also offer a fantastic tool-use capability.
Take a typical OpenAI API call and compare it to its Anthropic equivalent: the two would look very similar, yet still be different enough to need a custom integration.
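To make that concrete, here is a hedged sketch of the two request payloads, shaped approximately as the public OpenAI and Anthropic chat APIs documented them at the time of writing (the model names are just examples). Notice how close they are – and where they quietly diverge:

```python
import json

# The same prompt, as an OpenAI-style and an Anthropic-style payload.
# Shapes approximate the public APIs at the time of writing.

prompt = "Summarise our refund policy."

openai_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a support assistant."},
        {"role": "user", "content": prompt},
    ],
    "temperature": 0.2,
}

anthropic_payload = {
    "model": "claude-3-5-sonnet-20240620",
    "system": "You are a support assistant.",  # system prompt is top-level, not a message
    "max_tokens": 1024,                        # required by Anthropic, optional for OpenAI
    "messages": [
        {"role": "user", "content": prompt},
    ],
    "temperature": 0.2,
}

print(json.dumps(openai_payload, indent=2))
print(json.dumps(anthropic_payload, indent=2))
```

Ninety percent overlap, but the remaining ten percent – a top-level system prompt here, a mandatory `max_tokens` there – is exactly what forces the custom integration.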
Now, speaking of Anthropic: an example that promotes this supply-chain-isation of the AI ecosystem – and something I am very excited about – is Anthropic’s Model Context Protocol (MCP). It provides a clean approach to adding tooling and extensions to an AI model and, moreover, gives the interface in front of that model a way to integrate with the user’s own machine and accessible services. After all – as we said earlier – the interface is a key component in how we interact with AI.
Model Context Protocol: The first stage of a supply chain for AI?
To explain: MCP defines and standardises the interface between the LLM front-end and the tools and data sources available to it. If your tool speaks MCP, it can be registered with the interface and made available to the LLM. And if you are using another interface – maybe an IDE or an integrated assistant – then as long as it speaks MCP, you can register your service with it and it instantly becomes available to the LLM there too.
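Under the hood, MCP builds on JSON-RPC 2.0. The method names below (`tools/list`, `tools/call`) follow the MCP specification, while the tool itself (`lookup_invoice`) is a hypothetical example; this is a sketch of the wire format, not an MCP implementation:

```python
import json

# Minimal sketch of the JSON-RPC 2.0 messages an MCP client (the LLM
# interface) sends to a tool server. The tool name is hypothetical.

def jsonrpc_request(req_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. The interface asks the server which tools it offers...
list_tools = jsonrpc_request(1, "tools/list")

# 2. ...then invokes one on the LLM's behalf.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "lookup_invoice",
    "arguments": {"invoice_id": "INV-1042"},
})

print(list_tools)
print(call_tool)
```

Because every MCP-speaking tool emits and consumes the same message shapes, the interface never needs bespoke glue for each new tool – which is the whole point.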
This is an excellent first step towards making the data-supply side, or the upstream part of the supply chain more modular, and much more flexible for tool creators to get involved and supply value into the wider AI ecosystem.
What if?
But it’s only a first step. What I would really like to see from LLM vendors is standardisation of their downstream interfaces – the actual API calls to the service – and, in turn, potentially even between LLMs. This doesn’t need to be set in stone, but we should find some kind of common baseline that makes completions standard between vendors without relying on glue code from the likes of LangChain.
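In the absence of such a baseline, the workaround today is a thin adapter layer. The sketch below is hypothetical – the neutral request shape is invented – but the per-vendor payload fields mirror the public OpenAI and Anthropic chat APIs, showing how small the real differences are:

```python
# Hedged sketch: one neutral request shape, plus thin adapters that
# translate it into each provider's payload. The neutral shape is
# invented; the output fields mirror the public APIs.

def to_openai(req):
    return {
        "model": req["model"],
        "messages": [
            {"role": "system", "content": req["system"]},
            {"role": "user", "content": req["prompt"]},
        ],
    }

def to_anthropic(req):
    return {
        "model": req["model"],
        "system": req["system"],                    # top-level, not a message
        "max_tokens": req.get("max_tokens", 1024),  # required by Anthropic
        "messages": [{"role": "user", "content": req["prompt"]}],
    }

ADAPTERS = {"openai": to_openai, "anthropic": to_anthropic}

def build_payload(vendor, req):
    # Swapping providers becomes a one-word change, not a rewrite.
    return ADAPTERS[vendor](req)

request = {
    "model": "claude-3-5-sonnet-20240620",
    "system": "You are a support assistant.",
    "prompt": "Classify this support ticket.",
}
print(build_payload("anthropic", request)["max_tokens"])
```

A vendor-agreed baseline would make this adapter layer unnecessary – which is precisely the argument for one.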
I mean, AGI advocates might say: why not just let the AI write the glue code and live with that?
You know what – that’s a potential future. It’s probably not in scope for this talk, but the other future I foresee is agents building glue-code nano-services, deploying them to FaaS platforms like Lambda, and then just using endless iterations of these to handle whatever tasks they’re trying to achieve. We assume that an AGI will be efficient and optimal, but it has been trained on the sum of human knowledge, and people are messy – so it will undoubtedly pick up bad habits from us, including spaghetti code!
Which is why, ultimately, I think that to get real value from your AI journey, an enterprise needs solid governance.
So what is best for the Enterprise?
What enterprises need in their AI supply chain
For enterprises to truly benefit from AI, they need:
1. Choice
Enterprises must be able to swap AI models or providers without significant disruptions. Yet, this remains difficult due to inconsistencies in API structures and model behaviors. For example, OpenAI’s GPT-4o behaves very differently from Claude or DeepSeek, requiring extensive customization.
A well-structured AI supply chain ensures that organizations can adopt new models, manage bias, and fine-tune performance without excessive overhead.
2. Confidence
Businesses need assurance that AI is using the right data, under the right security frameworks, and by the right teams. This requires strong governance frameworks to prevent data leakage and unauthorized access while ensuring that enterprise data is used effectively without exposing sensitive information.
3. Changeability
AI must integrate seamlessly into existing workflows. While chat interfaces are great, not every AI interaction should be confined to a chatbox. AI capabilities should be embedded across enterprise tools – CRMs, dev environments, financial software – without forcing users into entirely new work paradigms.
The future of AI in enterprise: AI as a structured ecosystem
AI is often compared to electricity – an omnipresent, transformative force. But electricity required structured grids and regulations to unlock its full potential. AI is no different. Enterprises need structured AI supply chains to truly harness AI’s power.
The real question for businesses isn’t “Which AI model should we use?” – it’s “How do we structure our AI supply chain to stay flexible, secure, and scalable?”
And if AI supply chains are fundamentally an API problem, well – you can’t have AI without APIs. It’s time to structure AI adoption the right way.