The (potentially awful) future of APIM?

A very interesting trend that nobody seems to comment on: when LLMs came onto the scene, folks began using them to generate code. As it became clear that LLMs were rather good at this task, LLM vendors began optimising and benchmarking for it, seeking out software engineering as a significant use case to generate some return on all that investment.

That, of course, has led to alarmists screaming about AI “taking our jerbs”, which, as anyone who has coded seriously with AI will tell you, is nonsense (at least in the short term).

The other thing that has happened is the introduction of “tool use” for LLMs, which has enabled an explosion of AI agent development. It’s an awesome feature that opens up the realm of what an autonomous agent can do to interact with its environment (I’ve written about how there’s a commoditisation process that we might want to encourage with things like Anthropic’s Model Context Protocol here).
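
For the unfamiliar, “tool use” is mechanically simple: the application describes callables to the model, and the model replies with structured requests to invoke them. Here’s a hedged, vendor-neutral sketch of what such a declaration looks like (the shape below is generic and illustrative, not any one provider’s exact API):

```python
# A vendor-neutral illustration of a tool declaration: a name, a
# purpose, and a JSON-schema description of its parameters. The exact
# shape varies by provider; this one is illustrative only.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# When the model decides the tool is relevant, it replies with a
# structured call such as {"name": "get_weather", "input": {"city": "Riyadh"}};
# the application executes the call and feeds the result back to the model.
```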

The only folks who have taken this trend seriously and made the most of it so far (in my opinion – don’t @-me) are the team at Hugging Face, with the release of their agentic framework smolagents.

smolagents doesn’t bother with tool use – or not in the way other frameworks and tools do. It doesn’t need something like MCP to grant it access to tools at runtime. Instead, it has one tool to rule them all, one tool with near-unlimited expression: writing Python.
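
To make that concrete, here’s roughly what that looks like, a minimal sketch based on smolagents’ published quickstart (class names have shifted between releases, so treat the specifics as illustrative):

```python
from smolagents import CodeAgent, HfApiModel

# No bespoke tools are registered at all: the agent's single
# "tool" is its ability to write and execute Python.
model = HfApiModel()  # renamed InferenceClientModel in newer releases
agent = CodeAgent(tools=[], model=model)

# The model writes Python, the framework runs it in a restricted
# local interpreter, and the loop repeats until the agent has an answer.
agent.run("How many seconds would it take a leopard at full speed to run through Pont des Arts?")
```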

This framework completely bypasses the commoditisation of toolmaking by embracing an ugly truth that affects even human devs (I feel dirty saying this, but I’m guilty of it too): glue code. Yes, glue code is OK, so long as it’s temporary (right chat? right?).

During my talk at LEAP 2.0, I presented some of my humble predictions for the future (as you do when you are TechBro-CEO-Founder-Man™). Specifically, I spoke about how agentic systems will evolve, and why certain developer tooling that exists today is absolutely *ideal* for AI use cases: Functions-as-a-Service platforms.

Let me present an idealised (simplified) agentic team; it has, at minimum, four members (sketched in code after the list):

  1. The Overseer (or Project Manager): This agent takes in the task description and desired outcomes, then devises a team (from a set of pre-set profiles) and a project plan.
  2. The Librarian: This agent has access to a single tool, SearchForTools(); it can take a sub-task description and look up what tools already exist that an acting agent would need to fulfill the tasks.
  3. The Toolmaker: This agent does nothing but create tools that perform a task, push them to a FaaS backend, and register them with the Librarian (via a tool called RegisterTool()).
  4. Actor(s): These agents take sub-tasks and a tool set as input and perform a specific sub-task.
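
To ground that, here is a hypothetical sketch of the Librarian’s side of this design. SearchForTools() and RegisterTool() come from the list above, but every class and signature here is an assumption of mine, and a real system would match on embeddings rather than keywords:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    description: str
    faas_endpoint: str  # where the Toolmaker deployed it

@dataclass
class Librarian:
    registry: dict[str, Tool] = field(default_factory=dict)

    def register_tool(self, tool: Tool) -> None:
        # Called by the Toolmaker after pushing code to the FaaS backend.
        self.registry[tool.name] = tool

    def search_for_tools(self, subtask: str) -> list[Tool]:
        # Naive keyword overlap for illustration; a real Librarian
        # would use semantic search over tool descriptions.
        words = set(subtask.lower().split())
        return [t for t in self.registry.values()
                if words & set(t.description.lower().split())]
```

The point of the sketch is the shape of the loop: the Toolmaker calls register_tool() after each deployment, and Actors call search_for_tools() before asking for anything new to be built.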

This agentic team scales dynamically: as tasks come in, hyper-focused utility tools are created on demand, registered, and then used by the actors to complete the current task – as well as future ones.

This structure is adaptable: given only basic information about their environment, these teams could bootstrap themselves into having an ideal toolset for the kinds of tasks they need to achieve. A kind of digital evolution into a task-based niche.

Now what’s the issue here for APIM?

If these agents behave anything like the current developer-focussed agents we have, like CLine or Roocode, then they will generate large amounts of these FaaS functions, each one a new service in a field of similar services. The agentic system won’t really care about reuse, but we keep the garbage anyway, because for every 100 bits of garbage utility code, 3 tools might get reused in future tasks – and with storage and compute so cheap, why not?

You need to secure this stuff – and you need to monitor it too. These agents will be writing code that manipulates API calls; they will need access to short-lived API tokens for internal services; they will need to respect rate limits; they will need to be monitored to see what they are accessing and when; and if they are working with partner APIs, they may even have a budget attached.
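
What might that look like concretely? One plausible pattern is minting short-lived, narrowly-scoped credentials per agent task. Below is a minimal sketch using PyJWT – the claim names and the helper itself are hypothetical, not any specific APIM product’s API:

```python
import time
import jwt  # PyJWT; hypothetical sketch, not a specific product's API

SIGNING_KEY = "replace-with-a-real-secret"  # placeholder

def mint_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived, narrowly-scoped token for one agent task."""
    now = int(time.time())
    claims = {
        "sub": agent_id,
        "scope": " ".join(scopes),  # e.g. "orders:read"
        "iat": now,
        "exp": now + ttl_seconds,   # expires in minutes by design
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")
```

The design choice worth noting: the token outlives the sub-task by minutes, not days, so a leaked credential from some agent-generated utility function has a very small blast radius.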

Just imagine a scaled future of AI agents – thousands of these little teams running, pushing code, generating services and accessing APIs. Nobody is going to read that chat log. No, the observability requirements for monitoring these agents will be through the roof – far beyond what any regular Service Mesh or centralised APIM strategy is currently ready for.
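
Chat logs won’t cut it, but structured, queryable records might. Here’s a hypothetical sketch of the kind of per-call record an agent gateway could emit – every field name here is illustrative, not a standard schema:

```python
import json
import logging
import time

log = logging.getLogger("agent-gateway")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_agent_call(team_id: str, agent_id: str, task_id: str,
                   method: str, url: str, status: int,
                   cost_usd: float = 0.0) -> None:
    """Emit one structured record per API call an agent makes."""
    log.info(json.dumps({
        "ts": time.time(),
        "team": team_id,       # which agentic team
        "agent": agent_id,     # which member (Actor, Toolmaker, ...)
        "task": task_id,       # which sub-task triggered the call
        "method": method,
        "url": url,
        "status": status,
        "cost_usd": cost_usd,  # for partner APIs with a budget attached
    }))
```

With records like these you can answer the questions that matter – which team touched which API, when, and at what cost – without ever reading a transcript.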

Furthermore, those utility functions in your FaaS need securing – it may be the bots’ playground, but each function represents an access point into your infrastructure via an LLM, and if the LLM vendor is not careful, that hole can leak all kinds of data.

So why is this a troubled future for APIM? Because it all relies on standardisation and automation – heavy automation of system-specific security credentials – combined with the exponential growth of badly-documented internal APIs.

If you do not have a robust API management strategy in place, and a clear understanding of how these agentic workflows will use your APIs and your data, you will end up in a situation where you have zero oversight of potentially malicious code running rampant in your network. 

Yes, AI agents are typically supervised, but with agentic workflows, where agents do things for the user like a magic black box, it’s a different story. These agents are software – they are glue code – but you can’t see what they are doing unless you plan ahead very carefully. Just like bad code, if it lingers without being monitored, it becomes unmanageable.

So with that in mind, why not get in touch with one of our Solutions Architects and find out how you can best manage the agentic future of your organisation with robust API governance and oversight?