Deploying Tyk with Unikraft: Exploring the alignment of unikernels with API gateways


We recently dived into the topic of unikernels and their uses (and limitations). If you’re not familiar with unikernels yet, you can catch up here. In short, these specialized, single-purpose operating systems can deliver major security, performance and efficiency advantages. You can use them to package the application you want to deploy, with the unikernel image serving as the operating system and application compiled together into a single bootable artifact.

As unikernels’ use expands, we wondered: is it feasible yet to deploy a unikernel-based Tyk API Gateway? Our curious team decided to find out! Read on to discover how we got on with our proof of concept (PoC).

Tyk and unikernels 

The Tyk Gateway is well-suited for unikernels, as many of their benefits align with the needs of API gateways. To explore implementing Tyk in a unikernel environment, we used Unikraft – a secure, open source unikernel development kit – to build the unikernel image, hosting it on the Unikraft Cloud. 

Impressively, the PoC successfully demonstrated the feasibility of running Tyk in this setup, so we thought it might be useful to share our experiences below.

Architecture

The PoC used a hybrid architecture, where the gateways ran as a data plane in the Unikraft Cloud, with the control plane components running in a separate, local deployment. This is a typical architecture for distributed solutions: it provides the freedom to deploy gateways in multiple locations whilst retaining centralized management.

[Figure: The PoC architecture (simplified)]

Unikraft configuration

A typical Unikraft build and deployment uses several files:

  • The Kraftfile is specific to Unikraft’s build system. It includes details such as kernel configuration options, dependencies, and any customizations or build options for the unikernel. It provides a higher-level abstraction over the typical build process, making it easier to manage complex unikernel builds (a minimal sketch follows this list).
  • The Dockerfile is used to create a Docker image that contains the necessary environment to build, deploy, or run the unikernel. It specifies the steps required to set up a containerized environment (like installing dependencies, setting environment variables, etc.) for Unikraft to function.
  • The compose.yaml file is part of the Unikraft Compose framework, which simplifies building and running Unikraft-based unikernels in a container-like fashion. You define services and configurations that describe how your unikernel will be composed, built and deployed, covering its dependencies, runtime configuration and build targets.
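
As a rough illustration, a minimal Kraftfile for a gateway service might look something like the sketch below. The spec version, runtime and paths here are assumptions for illustration rather than the PoC’s exact values; the actual file is in the Tyk Demo repository.

spec: v0.6

# Use a pre-built base runtime for the unikernel
runtime: base:latest

# Build the unikernel's root filesystem from the local Dockerfile
rootfs: ./Dockerfile

# Start the gateway with the bundled configuration file
cmd: ["/opt/tyk-gateway/tyk", "--conf", "/opt/tyk-gateway/tyk.conf"]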

In addition to these, there are also the files to include in the unikernel image itself, which will vary depending on the application you’re deploying. These are typically the files that you want to customize for your use case. For example, for the Tyk Gateway, the tyk.conf configuration file is a good candidate, but you could also include custom plugins or any other custom assets.

You can find the Tyk unikernel-unikraft deployment in the Tyk Demo GitHub repository to see the configuration files we used and the information they contain. This demo provides a functioning implementation that you can use as a reference if you’re interested in exploring unikernel-based API gateways in greater depth. 

During the PoC, we built a gateway service using a Kraftfile and made port 443 available publicly via the Unikraft Cloud, mapping to port 8080 in the gateway unikernel. We used environment variables to set the MDCB-related variables that allow the gateway service to communicate back to the control plane, loading them from a .env file. We also imposed memory limits on the gateway and redis services. 
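
To make that concrete, here is a hedged sketch of what such a compose.yaml might contain. Unikraft Cloud supports a subset of the Docker Compose specification, so the exact field names may differ, and the service names and image tag here are illustrative rather than taken verbatim from the PoC:

services:
  gateway:
    build: .             # built from the local Kraftfile/Dockerfile
    ports:
      - "443:8080"       # expose 443 publicly, mapped to the gateway's port 8080
    env_file: .env       # MDCB connection settings kept out of source control
    mem_limit: 1024M     # memory cap (field name follows the Compose spec)

  redis:
    image: redis:7       # illustrative image tag
    mem_limit: 512M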

Note that you can add additional files to unikernel images as you build. For example, the Dockerfile in our PoC refers to a rootfs directory from which it copies files. Typically, these will be files or assets that require customization, such as configuration files or custom plugins. In the case of this PoC, just the tyk.conf is included, which lets us provide the Redis and MDCB configuration.
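
For illustration, the relevant part of such a Dockerfile might look like the sketch below; the image tag and filesystem paths are assumptions rather than the PoC’s exact contents:

# Pull the gateway binaries from the official Tyk image (tag is illustrative)
FROM tykio/tyk-gateway:v5.3 AS tyk

# Start from an empty filesystem and copy in only what the unikernel needs
FROM scratch
COPY --from=tyk /opt/tyk-gateway /opt/tyk-gateway

# Overlay the customized configuration from the local rootfs directory
COPY rootfs/tyk.conf /opt/tyk-gateway/tyk.conf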

Getting to know the Unikraft CLI

One handy tool we used during this unikernel API gateway PoC was the Unikraft CLI. This is a command-line tool for managing Unikraft unikernels, which allows you to create new projects, configure build options, manage dependencies and compile unikernels for different platforms. It’s modelled after popular package manager tools, so it provides commands for initializing Unikraft environments, selecting libraries, customizing configurations and building images.

The Unikraft CLI is designed to simplify the development and deployment of unikernels by automating common tasks and integrating with existing toolchains, making it well worth a look if you’re running your own unikernel PoC. For example, you only need this single command to build the unikernel and deploy it to the Unikraft Cloud:

kraft cloud compose up --env-file .env

It reads the local compose.yaml (and the Kraftfile or Dockerfile it references) to understand what services to build and how to build them, then pushes the resulting images to the Unikraft Cloud. You can also use the CLI to see information about the services you’ve deployed and combine it with other tools to extract specific values from a service.
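
For example, you could list your deployed instances and pipe an instance’s details through jq to extract a single field. The instance name and JSON path here are illustrative, and output flags may vary between kraft versions:

# List the instances deployed to Unikraft Cloud
kraft cloud instance list

# Fetch one instance as JSON and pull out a specific value
kraft cloud instance get gateway -o json | jq '.fqdn'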

Key learnings 

There were various things we observed and took note of during the Tyk API Gateway unikernel PoC, including:

  • Stateful scaling, where new instances of a service load their initial state from cache rather than booting cold, can be very helpful for services that have network-based dependencies during startup. An example is the gateway loading its configuration from MDCB.
  • Boot time observations: One of the most useful metrics for understanding service behavior is boot time, as this is critical for tuning other configurations and effective stateful caching. During our PoC testing, the data plane gateway booted in about 100ms, reducing to 10-15ms once the state was cached. In comparison, a gateway configured to load API definitions from disk booted in about 70ms, also reducing to about 10-15ms when state caching was enabled.
  • Cooldown time, the idle cutoff that, once exceeded, causes a service to be scaled down, needs careful thought. The correct duration depends on the application’s behavior, especially its boot time. If the setting is too low, the service may be scaled down before it has fully loaded its configuration, negating the value of stateful caching, so you need to understand startup behavior to set the value accurately.
  • Infrastructure resources: Unikraft Cloud is single core only, so only horizontal scaling is available. However, Unikraft does support POSIX threads, so multi-core execution should be possible, though perhaps only with on-premises deployments. The maximum RAM consumable by a service is configurable via the compose file; the PoC gateway was able to run with 512MB, but we found that using 1024MB produced quicker boot times. Additionally, the PoC gateway reported a file descriptor limit of 1024, far below Tyk’s recommended level of 80000 for production. However, this aligns with the single-core processing model and the emphasis on horizontal scalability, and it may be something that can be controlled with an on-premises deployment.
  • Environment variables can define configuration, in the same way as with a Docker Compose deployment. This allows sensitive or variable configuration to be kept outside of source-controlled files. For example, our PoC used a .env file to store the MDCB token and URL, as these vary across each deployment. The .env file can then be provided as an argument when the kraft CLI is called, as shown below with a sample .env:

kraft cloud compose up --env-file .env
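
For reference, a minimal .env for this setup might contain entries like the following. The variable names follow Tyk’s standard environment-variable mapping for the gateway’s slave_options (MDCB) settings, and the values are placeholders:

# MDCB connection string and credentials (placeholder values)
TYK_GW_SLAVEOPTIONS_CONNECTIONSTRING=your-mdcb-host:9091
TYK_GW_SLAVEOPTIONS_APIKEY=your-mdcb-api-key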

Want to see the PoC in action? You can watch the results here for further insights into how to deploy a unikernel-based Tyk API Gateway.