Setting up an API gateway on a multicloud Kubernetes cluster using Tyk

The digital transformation trend is heightening the need for efficient IT infrastructure, fueling the rise of Kubernetes, an open source platform for managing containerized applications. In particular, its multicloud capabilities offer a number of benefits that change the way organizations operate and deliver value to their customers.

By distributing a Kubernetes cluster across multiple cloud service providers, your organization can do the following:

  • Avoid vendor lock-in
  • Increase geographic diversity and position services closer to end users, improving user experience
  • Gain greater reliability and availability in the event that one provider experiences downtime
  • Optimize costs by choosing providers with the best terms
  • Promote better disaster recovery planning and reduce the risk of single points of failure

Essentially, a multicloud Kubernetes setup improves resiliency.

This brings us to today’s topic: setting up an API gateway on a multicloud Kubernetes cluster using Tyk. An API gateway takes the benefits described above to the next level. For instance, it enables efficient traffic routing across clusters, allowing for robust multi-environment deployments. This unified API approach simplifies gateway configuration and facilitates traffic splitting for better performance.

Beyond routing, an API gateway also provides a layer of abstraction between client interfaces and backend services, easing tasks like authentication and rate limiting. Additionally, it allows for straightforward data aggregation from multiple sources since all traffic must pass through the API gateway. This means it can play a critical role in effective multicloud and multi-cluster management.

In this tutorial, you’ll learn how to configure an API gateway on a multicloud Kubernetes cluster using Tyk. Let’s get started by reviewing what you need to follow along.

Configuring a Kubernetes multicloud cluster

In addition to a basic understanding of Kubernetes and command line tools, you need the following prerequisites:

  • A local machine with `kubectl` and `helm` command line tools installed.
  • A minimum of two cloud provider accounts. Here, you’ll use Linode and DigitalOcean. Simply sign up using your email, Google, or GitHub account.

Once everything is installed and your accounts are ready, you can set up your Kubernetes multicloud cluster.

Select a Kubernetes distribution that supports multicloud

Implementing a multicloud approach on Kubernetes involves several steps, starting with choosing an appropriate distribution.

Kubernetes has several distributions, each with a specific purpose. For instance, there are the so-called upstream distributions: open source distributions that closely follow vanilla Kubernetes releases and ship without any add-ons. There are also distributions focused on development that are usually lightweight and portable, as well as open source and closed source distributions that come with useful add-ons preinstalled out of the box. If you’re interested, you can find a handy matrix of most of the active Kubernetes distributions and their main features on Nubenetes.

That said, you may be wondering, does it matter which distribution I choose? The short answer is yes, it does.

The best practice is that you should choose a distribution with the following characteristics:

  • Open source to avoid vendor lock-in
  • Preferably DevOps-friendly, meaning it follows development and deployment best practices
  • Equipped with tools or add-ons that facilitate reproducible deployments across different cloud providers

Here, you’ll use Rancher Kubernetes Engine 2 (RKE2). It’s an open source, DevOps-friendly, Cloud Native Computing Foundation (CNCF)–certified Kubernetes distribution that has a straightforward installation.

Now that you know which Kubernetes distribution to use, it’s time to prepare the infrastructure.

Provision Kubernetes infrastructure on different cloud providers

Start by spinning up at least one virtual machine (VM) on each cloud provider. For this tutorial, you’ll create one VM on DigitalOcean and another on Linode.

Once you have SSH access to both VMs, you can prepare the nodes for deploying Kubernetes. On each VM, edit `/etc/hosts` and `/etc/hostname` to assign a fully qualified domain name (FQDN). This simple configuration is necessary for management reasons, since RKE2 enforces best practices and requires nodes to be reachable via fully qualified domain names.
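
For illustration, here’s one way to do this on systemd-based VMs. The FQDNs below are hypothetical; substitute your own domain:

```shell
# On the DigitalOcean VM (hypothetical FQDN; use your own domain)
sudo hostnamectl set-hostname control.k8s.example.com

# On the Linode VM
sudo hostnamectl set-hostname worker.k8s.example.com

# On each VM, map the FQDN in /etc/hosts, for example:
echo "127.0.1.1 control.k8s.example.com control" | sudo tee -a /etc/hosts
```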

Set up the Kubernetes control plane and worker node

Let’s start by installing RKE2’s `rke2-server` on the control-plane node. Fortunately, this convenient script does all the heavy lifting for you:

```shell
curl -sfL https://get.rke2.io | sh -
```

Once the script finishes, you must enable and start the `rke2-server` service. This can take a while, as some Kubernetes components and tools like `kubectl`, `crictl`, and `ctr` are downloaded and configured when the service first boots.
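
On a systemd-based distribution, that looks like this:

```shell
# Register the service to start at boot, then start it now
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
```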

By default, tools are stored in `/var/lib/rancher/rke2/bin/` and kubeconfig in `/etc/rancher/rke2/rke2.yaml`, meaning that you can verify the installation by running this:

```shell
/var/lib/rancher/rke2/bin/kubectl --kubeconfig=/etc/rancher/rke2/rke2.yaml cluster-info
```

If everything goes as expected, the output should look like this:

```shell
Kubernetes control plane is running at https://127.0.0.1:6443
```

Now that the control plane is ready, you must copy the access token generated during its first boot to a safe place. To retrieve it, run this command:

```shell
sudo cat /var/lib/rancher/rke2/server/node-token
```

With everything you need at hand, it’s a good idea to copy the kubeconfig stored at `/etc/rancher/rke2/rke2.yaml` to your local machine so you can manage your Kubernetes cluster more conveniently. Don’t forget to edit the line containing the control plane address, `server: https://127.0.0.1:6443`, replacing `127.0.0.1` with the public IP address of your node.
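
For example, assuming SSH access as `root` (adjust the user, IP, and paths to your setup):

```shell
# Copy the kubeconfig from the control-plane node to your local machine
scp root@{CONTROL-PLANE-IP}:/etc/rancher/rke2/rke2.yaml ~/.kube/config

# Point kubectl at the node's public IP instead of the loopback address
# (GNU sed shown; on macOS, use sed -i '')
sed -i 's/127.0.0.1/{CONTROL-PLANE-IP}/' ~/.kube/config
```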

Now it’s time to SSH into the worker node. First, run the same RKE2 bootstrap script you used for the control-plane node, but this time, set the environment variable `INSTALL_RKE2_TYPE=agent` to install the `rke2-agent` service instead of `rke2-server`:

```shell
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
```

Next, create the configuration directory `/etc/rancher/rke2/`. Inside this directory, create a file called `config.yaml` and paste in the IP address and access token of the control-plane node:

```yaml
server: https://{CONTROL-PLANE-IP}:9345
token: {ACCESS-TOKEN}
```

After saving `config.yaml`, enable and start the `rke2-agent` service, just as you did before with the `rke2-server` service.
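
Again, on a systemd-based distribution:

```shell
sudo systemctl enable rke2-agent.service
sudo systemctl start rke2-agent.service
```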

Congratulations! You just set up a multicloud Kubernetes cluster.
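
From your local machine, you can confirm that both nodes joined the cluster:

```shell
kubectl get nodes -o wide
```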

With this newly acquired knowledge, the possibilities are endless. You can add more nodes to your cluster, either using these two cloud providers or different ones. The procedure is the same.

For more information on how to install RKE2, check out the official Quick Start guide. Additionally, if you’re looking for more details on the options available for the bootstrap script, take a look at the configuration reference in the official documentation.

Once your multicloud Kubernetes cluster is up and running, it’s time to extend it by setting up the Tyk API gateway.

Implement an API gateway using the Tyk API Gateway and Tyk Operator

The Tyk API Gateway and Operator offer several advantages in a multicloud Kubernetes implementation. For instance, the API Gateway centralizes traffic, enabling efficient API management by routing both internal and external API calls throughout the cluster. This enhances scalability and flexibility and simplifies troubleshooting.

Additionally, it supports various authentication and security schemes, allowing developers to focus on coding essential components while maintaining consistent security across APIs.

The Tyk Operator brings full API management capabilities to Kubernetes, streamlining the deployment of applications and API updates within DevOps and GitOps workflows. It enhances delivery velocity and stability in microservices-based, containerized architectures. Moreover, its versioning support allows for easy rollbacks, increasing overall system resilience.

Now it’s time to see how simple it is to integrate these tools into your multicloud Kubernetes cluster.

How to install Tyk OSS Gateway on Kubernetes

According to Tyk’s documentation, the easiest way to install the Tyk OSS Gateway, also known as Tyk’s Headless Gateway, is by using Helm charts.

Start by adding the Tyk Helm repository to your local machine:

```shell
helm repo add tyk-helm https://helm.tyk.io/public/helm/charts/
```

Then update the Helm repositories:

```shell
helm repo update
```

Since this is not a production setup, you’ll use the `simple-redis` chart. The following command creates the `tyk` namespace and deploys the necessary Redis components to it:

```shell
helm install redis tyk-helm/simple-redis --create-namespace -n tyk
```

The output should look like this:

```shell
NAME: redis
LAST DEPLOYED: Sun Jul 16 22:42:43 2023
NAMESPACE: tyk
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Connect to redis: redis.tyk.svc.cluster.local:6379
```

Take note of the address to connect to Redis (i.e., `redis.tyk.svc.cluster.local:6379`) as you’ll use it later.

> Please note: This chart is suitable for evaluation purposes only. For production, it’s recommended to use the `bitnami/redis` chart instead.

Next, save the values used by the `tyk-headless` chart in the `values.yaml` file using the following command:

```shell
helm show values tyk-helm/tyk-headless > values.yaml
```

Edit `values.yaml`. In the `redis` section, add the following line:

```yaml
...
redis: # <----- Redis section
  addrs: redis.tyk.svc.cluster.local:6379 # <----- Don't forget to indent
...
```

Then append the following in the `secrets` section:

```yaml
...
secrets: # <----- Secrets section
  APISecret: YOURTYKAUTHTOKEN # <---- TYK_AUTH
  OrgID: YOURTYKORGTOKEN # <---- TYK_ORG
...
```

You can use whatever values you want for these tokens. Just be sure to save them as you’ll need those secrets when deploying the Tyk Operator.
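
For instance, one simple way to generate random tokens is with `openssl`:

```shell
# Run twice: once for the API secret, once for the org ID
openssl rand -hex 32
```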

Now, install the Tyk OSS Gateway in the `tyk` namespace using the following command:

```shell
helm install tyk-ce tyk-helm/tyk-headless -f values.yaml -n tyk
```

Verify that the gateway is working by forwarding the traffic to your local machine using the following command:

```shell
kubectl port-forward service/gateway-svc-tyk-ce-tyk-headless 9000:443 -n tyk
```

Test the endpoint:

```shell
curl localhost:9000/hello
```

The output should be similar to the following:

```shell
{"status":"pass","version":"5.0.0","description":"Tyk GW","details":{"redis":{"status":"pass","componentType":"datastore","time":"2023-07-17T02:59:53Z"}}}
```

After confirming that everything works as expected, you can proceed with the next step: installing the Tyk Operator.

How to install the Tyk Operator

The Tyk Operator depends on cert-manager, which you can install using the following command:

```shell
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.8.0/cert-manager.yaml
```

Then create a new namespace for the Tyk Operator:

```shell
kubectl create namespace tyk-operator-system
```

Following best practices, the Tyk Operator uses a secret to store the authentication information used to connect with the Tyk Gateway. Create the secret using the following template:

```shell
kubectl create secret -n tyk-operator-system generic tyk-operator-conf \
  --from-literal "TYK_AUTH=YOURTYKAUTHTOKEN" \
  --from-literal "TYK_ORG=YOURTYKORGTOKEN" \
  --from-literal "TYK_MODE=ce" \
  --from-literal "TYK_URL=http://gateway-svc-tyk-ce-tyk-headless:8080" \
  --from-literal "TYK_TLS_INSECURE_SKIP_VERIFY=true"
```

Be sure to replace `YOURTYKAUTHTOKEN` and `YOURTYKORGTOKEN` with the values used when editing `values.yaml`.

When ready, install the Tyk Operator using Helm:

```shell
helm install tyk-operator tyk-helm/tyk-operator -n tyk-operator-system
```

The output should look like this:

```shell
NAME: tyk-operator
LAST DEPLOYED: Sun Jul 16 22:58:32 2023
NAMESPACE: tyk-operator-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have deployed the tyk-operator! See https://github.com/TykTechnologies/tyk-operator for more information.
```

Overall, by using the Tyk API Gateway and Tyk Operator, your organization can improve the management and security of APIs in multicloud Kubernetes environments.
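
To see the pieces working together, you can create an `ApiDefinition` resource and let the Operator publish it to the gateway. The following keyless example is a minimal sketch based on the Tyk Operator’s custom resources; it proxies requests from the gateway’s `/httpbin` path to the public httpbin.org service:

```yaml
# httpbin.yaml: a minimal keyless API definition for the Tyk Operator
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin
  use_keyless: true
  protocol: http
  active: true
  proxy:
    # Forward anything sent to /httpbin on the gateway to httpbin.org
    target_url: http://httpbin.org
    listen_path: /httpbin
    strip_listen_path: true
```

Apply it with `kubectl apply -f httpbin.yaml`. Once the Operator reconciles the resource, a request through the gateway (for example, `curl localhost:9000/httpbin/get` with the earlier port-forward still running) should return a response from httpbin.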

Implement common concepts on your multicloud cluster

There are some common concepts to consider after launching a multicloud Kubernetes cluster. They make it easier to manage your multicloud infrastructure by providing load balancing, more stable connections with minimal latency, and powerful UIs for managing workloads, monitoring applications, and managing your APIs. Let’s review each of them.

Service discovery and load balancing

Service discovery and load balancing are pivotal aspects of a multicloud Kubernetes implementation. Each cloud provider offers its unique solution for these tasks, but unfortunately, these typically cater solely to VMs and Kubernetes clusters within their specific ecosystem. This limitation calls for a more versatile solution.

This is where the role of content delivery networks (CDNs) comes into play. CDNs like Akamai, Fastly, and Cloudflare offer dynamic traffic routing capabilities that can intelligently route traffic based on a variety of factors, including geographical location determined by IP, device type information provided by the browser, the originating network, and ping times. Instead of merely circulating traffic, these CDNs evaluate these parameters to route traffic to the node that can best serve the end user. This approach ensures better performance, reduced latency, and enhanced user experience across various cloud environments.

Networking

In a multicloud Kubernetes setup, establishing a network that spans multiple cloud providers is key to uninterrupted interconnectivity. Tools like Submariner or Istio help here, as they allow clusters to operate as though they’re part of a single network, even when they’re hosted on distinct cloud ecosystems.

Deployment and management tools

It makes a lot of sense to use a tool that facilitates the deployment and management of your Kubernetes clusters. Rancher is a common choice because it supports all major cloud providers and incorporates powerful features for identity and access management (IAM) as well as monitoring and logging through Grafana and Prometheus.

Conclusion

In this tutorial, you learned about the benefits of a multicloud approach, how to set up a multicloud Kubernetes cluster, and how to implement common concepts within this environment.

You also learned about the integral role of the Tyk API Gateway and Tyk Operator in these implementations. Tyk simplifies the management process and significantly enhances the security, scalability, and troubleshooting capabilities of multicloud Kubernetes clusters. It’s an invaluable tool, enabling more streamlined deployment of applications and easier, more efficient API updates. These benefits make Tyk a compelling choice for anyone seeking to maximize the potential of their multicloud Kubernetes deployments. Try it out for free today!