Full lifecycle API management for Kubernetes, courtesy of Tyk Operator 

Tyk Operator brings full lifecycle API management capabilities to Kubernetes. Splendid. But what does that actually mean in practice? Read on to find out…

In short, using Tyk Operator means you can manage APIs on Kubernetes more easily, more efficiently and more safely. You can deploy applications and associated API updates using a single, streamlined process within your DevOps and GitOps flow. All with versioning that means you can roll back as easily as reverting a commit in Git, should you need to.

We’ll dive into the details of what Tyk Operator is below, but first let’s enjoy a quick refresher on why Kubernetes is so popular and how microservices and containers can benefit your enterprise.

What has this got to do with software delivery?

Agile development has taken the world by storm. It provides increased delivery velocity and improved stability. This means we can deliver incremental value and bring actual benefits to users in a continuous and efficient way through regular releases.

To measure software delivery performance, more and more organisations are turning to the four key metrics as defined by the DORA research program:

  • Deployment Frequency
  • Lead Time for Changes
  • Change Failure Rate
  • Time to Restore Service

At a high level, Deployment Frequency and Lead Time for Changes measure velocity, while Change Failure Rate and Time to Restore Service measure stability. The ambition to release changes to solve customer problems as quickly as possible, whilst maintaining production stability, has led to the adoption of microservices software architecture and DevOps practices.

Why microservices?

In a microservices architecture, the application is developed as a suite of small services, each running in its own process and communicating with other services using lightweight mechanisms – often an HTTP resource API.

The microservices architecture emerged to address the problems of traditional monolithic applications. In a monolithic architecture, everything is tightly coupled, with no clear boundaries between the elements of the application. Testing a change, or coordinating between teams to make sure a change in one place won’t cause unexpected problems elsewhere, often takes a long time.

With a microservices architecture, the system is broken into small services that are built around business capabilities. The development team responsible for a service becomes the domain expert for that business capability. They can understand customer problems and design solutions for them quickly and efficiently. Services communicate with each other over published, documented API specifications, making integration easy and safe.

Why containers?

Once development is complete, we need a reliable way to ship the service to different runtime environments: from the developer’s laptop to staging and then to production. Containers help by packaging your software (e.g. a service) together with everything it needs to run, including code, dependencies, libraries and binaries. The container then behaves consistently wherever it runs, regardless of the underlying platform.

Containerisation is a widely adopted tool for continuous delivery. Containers can be created and deployed to any environment quickly, greatly reducing the lead time for changes. The consistent runtime environment also ensures that the application will not fail due to different environment settings in production.

Why Kubernetes?

In a production environment, it’s important that the service is available and will scale up when demand surges. That’s the job of a container orchestration platform – both monitoring the ‘liveness’ of container instances and deploying additional instances of the container when required.

Kubernetes is a portable, extensible, open source container orchestration platform. It automates the deployment, scaling and management of containerised applications, and it has become the de facto choice for running containerised workloads at scale.

In Kubernetes, the set of computing resources that run containerised applications is called a cluster.

The magic of Kubernetes lies in automation. Through automation you can run thousands of containers across many clusters and expect the same consistent behaviour from every one of them. Let’s look at how…

Declarative configuration and automation

A declarative configuration defines the desired state for a system – i.e. the “what”. In contrast, an imperative configuration defines how the desired state should be achieved, often as a set of instructions to reach that desired state.

Kubernetes achieves automation through declarative configurations and a set of controllers that observe the system and reconcile its actual state with the desired state.

When you deploy a containerised service in Kubernetes, you define the desired state of that service, for example the number of concurrently running instances. The declarative configuration (desired state) is specified using a .yaml file. Kubernetes then continually and actively manages every object’s actual state to match the desired state you defined.

For example, if you wanted a chef to create a cake, a declarative spec could look like this:

cake:
  flavour: chocolate
  shape: star
  toppings:
    - strawberry
    - blueberry

As soon as you apply the above spec to the Kubernetes Kitchen, the chef would check that no such cake currently exists and then would immediately start working to bake you a cake to that specification. Once the cake you want is on your plate, they would stop baking. The moment you devoured it, however, the chef would make another one (with the same flavour, shape, and toppings) as their aim is to maintain a supply of available cakes to the programmed specification.

Out of the kitchen and back in Kubernetes, a deployment is an object that represents an application running on your cluster; a deployment specification is used to configure the deployment.

When you create the deployment, you might specify that you want three replicas of the application to be running. The Kubernetes system reads the deployment spec and so starts three instances of your desired application, updating the status to match your spec. If any of those instances should fail for some reason (i.e. there is a change to the status), the Kubernetes system responds to the difference between spec and status by making a correction (in this case, starting a replacement instance). In this way, Kubernetes automates the provision of service instances in the microservice environment.
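
As a minimal sketch of such a deployment specification – the application name and container image below are illustrative placeholders – the key line is replicas: 3, the desired state that Kubernetes continually enforces:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # desired state: three running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # illustrative image
          ports:
            - containerPort: 8080

Apply this spec (for example with kubectl apply) and Kubernetes starts three instances; delete one and a replacement is started to close the gap between status and spec.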

API management on Kubernetes

It’s now possible to achieve the same level of automation for API management on Kubernetes (thanks, Tyk Operator!). This means that:

  • A developer can describe the API they want – the API endpoint, authentication, rate limits, policies, etc. – and then let the system bring it to life (see the sketch after this list).
  • An operational engineer can entrust Tyk’s APIs to Kubernetes’ self-healing powers, just as with any other Kubernetes-native object.
  • A developer can reuse the same tested API specifications across multiple environments and clusters, with the confidence of getting the same consistent behaviour.
  • A developer can automate deployment of APIs, applications and services, using the same pipeline.
  • A developer can easily roll out or roll back a change by following the same process.
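
To sketch what “describing the API you want” looks like in practice, here is a minimal ApiDefinition resource, modelled on the keyless httpbin example from the Tyk Operator documentation – the names, listen path and target URL are illustrative:

apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin
  use_keyless: true                  # no authentication, for brevity
  protocol: http
  active: true
  proxy:
    target_url: http://httpbin.org   # the upstream service
    listen_path: /httpbin            # the path exposed on the gateway
    strip_listen_path: true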

This is how Tyk Operator can make managing APIs on Kubernetes easier, quicker and safer.

Step forwards, Tyk Operator!

Tyk Operator is an agent that can be deployed to your Kubernetes cluster. It allows Tyk APIs and policies to be specified as configurations. Tyk Operator watches for any divergence between the desired state and the actual state on the gateway. For instance, if the rate limit of a service is updated in the configuration, Tyk Operator instructs the gateway to apply the new limit. See it in action!
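
As a hedged sketch of how such a rate limit might be expressed – assuming the SecurityPolicy custom resource provided by Tyk Operator, with field names following its published examples, and reusing the illustrative httpbin API from earlier – a policy could look like this:

apiVersion: tyk.tyk.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: httpbin-rate-limit
spec:
  name: httpbin-rate-limit
  state: active
  active: true
  rate: 50                  # allow 50 requests…
  per: 60                   # …per 60-second window
  access_rights_array:
    - name: httpbin         # the ApiDefinition resource this policy protects
      namespace: default    # …and the namespace it lives in
      versions:
        - Default

Change rate or per and re-apply the file: Tyk Operator spots the divergence and updates the gateway accordingly.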

The same API specifications can be applied against different environments, with the assurance that they will behave just the same across them all.

If your environment is properly set up with a DevOps and GitOps flow, then application deployment and associated API updates can be done in a single, streamlined process. A developer interacts with their usual Git flow to commit code and API configurations, without needing to stray away to the Dashboard.

Every change to the API configuration can be versioned. If something is misconfigured, rolling back the system is as easy as reverting the commit in Git. No more stress when you need to restore a service in production!

The roadmap

Tyk Operator is an open source project under active development. We want to make Tyk the go-to choice for API management in Kubernetes, supported by Kubernetes-native tooling and workflows. This means we have big plans for the year ahead, including:

  • Greatly improving Tyk Operator documentation, so that users can easily get on board with this awesome tool.
  • Expanding the capabilities of Tyk Operator to integrate exciting features like Universal Data Graph, GraphQL Federation, OAS and Tyk Developer Portal.
  • Focussing on the Kubernetes developers’ experience and demonstrating our ability to work well with our clients’ deployment patterns, security requirements and integration needs.

Of course, if we’ve already whetted your appetite sufficiently, you can get in touch for a chat or discover more about adding API management to Kubernetes today.