How evolving team topologies are driving APIOps adoption

Tyk’s history in API management runs strikingly parallel to the maturation of the API product industry. Tyk was founded because our CEO, Martin, needed a solution to a problem right in front of him and had no viable alternative to building it himself. He developed that solution in isolation, released it to the open source community, and the rest is history.

At the time Tyk was founded, APIs had been around for a fair while already, but they had typically been treated as essential services, not productisable entities, and so the appetite for investing in teams to develop and manage them was understandably low.

Many organisations ran with a ‘lone wolf’ API development operation, relying on the skills of an individual contributor to support their essential APIs. However, when it became clear that APIs would become a critical element of the emerging ‘internet of all the things’, this attitude began to change.

Let’s explore this transformative journey: as organisations recognise APIs as crucial business assets, team structures evolve, driving the adoption of API operations (APIOps).

The emergence of API teams 

API management (APIM) products appeared in response to a recognised need, but it took a little longer to see the organisational change that really supported their mass adoption. API teams emerged to replace individual contributors, but they remained small and often isolated. These ‘compact teams’ would be alone in calling for API management tooling in their organisation. They often had to fight for what they needed, as the business lagged behind in realising the importance of APIM.

However, time passed and, eventually, organisations cottoned on to the idea that APIs hold intrinsic business value, and, in some cases, can be the primary revenue stream for an entire organisation.

Evolving team topologies

When this happened – and especially in larger organisations used to running distributed teams – a new team topology naturally emerged (a fancy way of describing the way a team, or team of teams, is structured), one where individual skill sets were valued as specialisms: API developers, testers, product managers, and so on. In many cases, multiple teams were required as the API development and operation workload grew too great for any single team to handle.

These teams follow familiar software development tropes, structuring themselves like any other cluster of development teams would, with centralised resources for everyday needs such as infrastructure management, deployment pipelines and site reliability engineering. It’s these distributed teams which particularly need APIOps and are driving its adoption as an emerging capability.

So, what is APIOps?

Broadly speaking, APIOps is the set of supporting activities around developing, publishing and running APIs that allow them to be delivered securely and reliably.

It’s relatively early days for APIOps as a discipline, but there are some identifiable strands:

Observability 

A hot topic in the API landscape right now. Observability wraps monitoring and troubleshooting into a Jedi-like knack for knowing things are happening almost before they happen, allowing API teams to provide a rock-solid service to their customers by always staying on top of site reliability. This means monitoring the health of the service using USE metrics (utilisation, saturation, errors), RED metrics (rate, errors, duration) or the Four Golden Signals (latency, traffic, errors, saturation). Crucially, users also need APIM platforms to support the ‘what now?’ that inevitably arises when a problem is discovered.
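
As a rough illustration of what the RED side of this looks like in practice, here’s a minimal sketch in Python using the prometheus_client library; the metric names, labels and port are illustrative assumptions rather than anything prescribed by a particular APIM platform.

```python
# Minimal RED-metrics sketch: request rate, errors and duration
# for an API endpoint. Metric names and labels are illustrative
# assumptions, not a fixed convention.
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter(
    "api_requests_total", "Total API requests", ["endpoint", "status"]
)
LATENCY = Histogram(
    "api_request_duration_seconds", "API request duration", ["endpoint"]
)

def handle_request(endpoint: str) -> None:
    """Record rate, errors and duration for a single request."""
    start = time.time()
    status = "200"
    try:
        pass  # ... call the real request handler here ...
    except Exception:
        status = "500"
        raise
    finally:
        REQUESTS.labels(endpoint=endpoint, status=status).inc()
        LATENCY.labels(endpoint=endpoint).observe(time.time() - start)

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for the monitoring stack to scrape
    handle_request("/orders")
```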

Deployment pipelines 

These are intrinsically driven by the distributed team topology but apply to any well-managed (API) development process. With distributed teams all contributing to a common product or catalogue, it’s important to have tools which enable a speedy and efficient deployment path. API development teams are increasingly using multi-environment set-ups to develop and test before going live, and moving new API versions between these environments is considerably easier and safer with dedicated tooling.
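
As a hedged sketch of what such a promotion step might look like, the Python script below exports an API definition from a staging environment and applies it to production; the endpoints, environment variables and payload shape are hypothetical stand-ins for whatever your APIM platform actually exposes.

```python
# Hypothetical promotion script: move an API definition from a
# staging environment to production. The URLs, tokens and paths
# are illustrative placeholders, not a specific platform's API.
import os

import requests

STAGING_URL = os.environ["STAGING_DASHBOARD_URL"]
PROD_URL = os.environ["PROD_DASHBOARD_URL"]
STAGING_TOKEN = os.environ["STAGING_TOKEN"]
PROD_TOKEN = os.environ["PROD_TOKEN"]

def promote(api_id: str) -> None:
    # Export the definition from staging...
    resp = requests.get(
        f"{STAGING_URL}/apis/{api_id}",
        headers={"Authorization": STAGING_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    definition = resp.json()

    # ...and apply it to production.
    resp = requests.put(
        f"{PROD_URL}/apis/{api_id}",
        headers={"Authorization": PROD_TOKEN},
        json=definition,
        timeout=10,
    )
    resp.raise_for_status()
    print(f"Promoted {api_id} from staging to production")

if __name__ == "__main__":
    promote("orders-api")
```

In practice, a step like this would typically run inside a CI/CD pipeline, with the API definitions held in version control as the source of truth.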

Task automation 

As topologies become more complex, there will be a growing focus on automating time-heavy tasks, such as merging and regression testing. This often means using command line tools to interface with software management platforms, as it’s considerably easier to create automation scripts by stringing together commands.
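
For instance, a team might string a few commands together into a single script that only merges a change once the regression suite passes. The sketch below uses Python with generic git and pytest commands as placeholders for whatever tools your team actually uses.

```python
# Small automation sketch: chain CLI commands so a branch is only
# merged once its regression tests pass. The commands shown are
# placeholders for your team's actual tooling.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises if the command fails

def main(branch: str) -> int:
    try:
        run(["git", "fetch", "origin"])
        run(["git", "checkout", branch])
        run(["pytest", "tests/regression"])       # regression suite
        run(["git", "checkout", "main"])
        run(["git", "merge", "--no-ff", branch])  # merge only on success
    except subprocess.CalledProcessError as err:
        print(f"automation step failed: {err}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "feature/api-change"))
```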

Dry-runs, roll-backs and disaster recovery 

These are vital capabilities for mission-critical assets such as API products. Organisations need the ability to stop changes reaching live production, to roll back mistakes quickly with minimal fuss, and to stand up parallel environments rapidly when things go south. Cloud computing solves some of this need, but there are still opportunities to support these processes with effective ops capabilities.
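
One lightweight way to keep roll-backs cheap is to treat API configuration as versioned files in git, so reverting is just a matter of checking out and reapplying an earlier revision. The sketch below illustrates the idea; the file path and the apply step are assumptions for the example, standing in for your own deployment tooling.

```python
# Rough sketch of a git-based rollback with a dry-run option.
# apply_config() is a placeholder; in a real setup it would call
# your deployment tooling or management API.
import subprocess

CONFIG_PATH = "apis/orders-api.json"  # hypothetical config file

def apply_config(path: str, dry_run: bool = False) -> None:
    # Placeholder: validate and push the config to the gateway.
    action = "would apply" if dry_run else "applying"
    print(f"{action} {path}")

def rollback(revisions_back: int = 1, dry_run: bool = True) -> None:
    # Restore the file as it was N commits ago...
    subprocess.run(
        ["git", "checkout", f"HEAD~{revisions_back}", "--", CONFIG_PATH],
        check=True,
    )
    # ...then dry-run (or actually run) the deployment step against it.
    apply_config(CONFIG_PATH, dry_run=dry_run)

if __name__ == "__main__":
    rollback(revisions_back=1, dry_run=True)
```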

How Tyk is leading the APIOps conversation 

Tyk continues to excel across these areas of APIOps. Our commitment to developing deployment pipelines, automation and cutting-edge developer experience (DevX) tooling underscores our dedication to the evolving API landscape. Key products like Tyk Operator and tykctl take centre stage in an APIOps environment, offering crucial support for teams navigating new challenges in API configuration and publication.

Moreover, our engagement with customers extends beyond technology to encompass crucial discussions on team topologies, development pipelines, and effective practices in distributed API development.

As more of our clients grapple with these complexities, Tyk is uniquely positioned to lead the conversation and seize the immense opportunities presented by the dynamic APIOps world. Get in touch with the team to find out more.