The Complete Kubernetes Tyk Demo


The tyk-k8s-demo repository allows you to start up an entire Tyk Stack with all its dependencies, as well as other tools that integrate with Tyk. The repository spins up everything in Kubernetes using Helm and bash scripts to get you started.

Purpose

Minimize the effort needed to start up the Tyk infrastructure, and show examples of how Tyk can be set up in k8s using different deployment architectures and integrations.

Prerequisites

Required Packages

You will need the following tools to be able to run this project.

  • Kubectl - CLI tool for controlling Kubernetes clusters
  • Helm - Helps manage Kubernetes applications through Helm charts
  • jq - CLI for working with JSON output and manipulating it
  • git - CLI used to obtain the project from GitHub
  • Terraform - Infrastructure-as-code tool (required only when using the --cloud flag)

Tested on Linux/Unix-based systems on AMD64 and ARM architectures.
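
To confirm the tools are available before running the scripts, you can do a quick sanity check from a shell. Version output varies by platform, and Terraform is only needed if you plan to use the --cloud flag:

kubectl version --client
helm version
jq --version
git --version
terraform version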

License Requirements

  • Tyk OSS: No license required as it is open-source.

  • Licensed Products: Sign up using the button below and choose “Get in touch” to receive a guided evaluation of the Tyk Dashboard and your temporary license.

How to use the license key

Once you have obtained your license key(s), create a .env file from the provided example and update it with your licenses as follows:

git clone https://github.com/TykTechnologies/tyk-k8s-demo.git
cd tyk-k8s-demo
cp .env.example .env

Depending on the deployments you would like to install, set the values of LICENSE, MDCB_LICENSE, and PORTAL_LICENSE in the .env file.
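
For example, a .env for a full self-managed stack with the Enterprise Portal might contain entries like the following, where the bracketed values are placeholders for your own keys:

LICENSE=<your-dashboard-license>
MDCB_LICENSE=<your-mdcb-license>
PORTAL_LICENSE=<your-portal-license>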

Minikube

If you are deploying this demo on Minikube, you will need to enable the ingress addon. You can do so by running the following commands:

minikube start
minikube addons enable ingress
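
You can confirm the addon is enabled with minikube addons list. Recent Minikube versions run the ingress controller pods in the ingress-nginx namespace, so a check along these lines should show them as Running:

minikube addons list
kubectl get pods -n ingress-nginx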

Quick Start

./up.sh --deployments portal,operator-httpbin tyk-stack

This quick start command will start up the entire Tyk stack along with the Tyk Enterprise Portal, Tyk Operator, and an httpbin CRD example.
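
Once the script reports that everything is up, a quick way to verify the installation (assuming the default 'tyk' namespace and the Tyk Operator's ApiDefinition CRD) is:

kubectl get pods --namespace tyk
kubectl get apidefinitions --namespace tyk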

Possible deployments

  • tyk-stack: A comprehensive Tyk Self Managed setup for a single region
  • tyk-cp: Tyk control plane in a multi-region Tyk deployment
  • tyk-dp: Data plane of hybrid gateways that connect to either Tyk Cloud or a Tyk Control Plane, facilitating scalable deployments
  • tyk-gateway: Open Source Software (OSS) version of Tyk, self-managed and suitable for single-region deployments

Dependencies Options

Redis Options

  • redis: Bitnami Redis deployment
  • redis-cluster: Bitnami Redis Cluster deployment
  • redis-sentinel: Bitnami Redis Sentinel deployment

Storage Options

  • postgres: PostgreSQL deployment (default)
  • mongo: MongoDB deployment (AMD64 only)
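
For example, to run the full stack against Redis Sentinel and MongoDB (note that mongo is AMD64 only), something along these lines should work:

./up.sh --redis redis-sentinel --storage mongo tyk-stack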

Supplementary Deployments

Please see this page for Tyk deployments compatibility charts.

  • cert-manager: deploys cert-manager.
  • datadog: deploys Datadog agent and starts up Tyk Pump to push analytics data from the Tyk platform to Datadog. It will also create a Datadog dashboard for you to view the analytics.
  • elasticsearch: deploys Elasticsearch and starts up Tyk pump to push analytics data from the Tyk platform to Elasticsearch.
    • elasticsearch-kibana: deploys the Elasticsearch deployment as well as a Kibana deployment and creates a Kibana dashboard for you to view the analytics.
  • jaeger: deploys the Jaeger operator, a Jaeger instance, and the OpenTelemetry collector, and configures the Tyk deployment to send telemetry data to Jaeger through the OpenTelemetry collector.
  • k6: deploys a Grafana K6 Operator.
    • k6-slo-traffic: deploys a k6 CRD to generate a load of traffic to seed analytics data.
  • keycloak: deploys the Keycloak Operator and a Keycloak instance.
    • keycloak-dcr: starts up a Keycloak Dynamic Client Registration example.
    • keycloak-jwt: starts up a Keycloak JWT Authentication example with Tyk.
    • keycloak-sso: starts up a Keycloak SSO example with the Tyk Dashboard.
  • newrelic: deploys New Relic and starts up a Tyk Pump to push analytics data from the Tyk platform to New Relic.
  • opa: enables Open Policy Agent to allow for Dashboard APIs governance.
  • opensearch: deploys OpenSearch and starts up Tyk Pump to push analytics data from the Tyk platform to OpenSearch.
  • operator: deploys the Tyk Operator and its dependency cert-manager.
    • operator-federation: starts up Federation v1 API examples using the tyk-operator.
    • operator-graphql: starts up GraphQL API examples using the tyk-operator.
    • operator-httpbin: starts up an httpbin API example using the tyk-operator.
    • operator-jwt-hmac: starts up API examples using the tyk-operator to demonstrate JWT HMAC auth.
    • operator-udg: starts up Universal Data Graph API examples using the tyk-operator.
  • portal: deploys the Tyk Enterprise Developer Portal as well as its dependency PostgreSQL.
  • prometheus: deploys Prometheus and starts up Tyk Pump to push analytics data from the Tyk platform to Prometheus.
    • prometheus-grafana: deploys the Prometheus deployment as well as a Grafana deployment and creates a Grafana dashboard for you to view the analytics.
  • vault: deploys Vault Operator and a Vault instance.
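
Deployments can be combined in a single comma-separated list, and, as in the quick start, child options such as keycloak-jwt appear to pull in their parent deployment automatically. A sketch of a combined run, using names from the list above:

./up.sh --deployments keycloak-jwt,elasticsearch-kibana tyk-stack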

If you are running a POC and would like an example of how to integrate a specific tool, you are welcome to submit a feature request.

Example

./up.sh \
  --storage postgres \
  --deployments prometheus-grafana,k6-slo-traffic \
  tyk-stack

The deployment process takes approximately 10 minutes, as the installation is sequential and some dependencies take time to initialize. Once the installation is complete, the script will output a list of all the services that were started, along with instructions on how to access them. Afterward, the k6 job will begin running in the background, generating traffic for 15 minutes. To monitor live traffic, you can use the credentials provided by the script to access Grafana or the Tyk Dashboard.
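
If you want to follow the k6 load test while it runs, the job is created in the same namespace; the job name below is a placeholder, so check kubectl get jobs for the actual name:

kubectl get jobs --namespace tyk
kubectl logs --namespace tyk --follow job/<k6-job-name>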

Usage

Start Tyk deployment

Create and start up the deployments

Usage:
  ./up.sh [flags] [command]

Available Commands:
  tyk-stack
  tyk-cp
  tyk-dp
  tyk-gateway

Flags:
  -v, --verbose         bool     set log level to debug
      --dry-run         bool     set the execution mode to dry run. This will dump the kubectl and helm commands rather than execute them
  -n, --namespace       string   namespace the tyk stack will be installed in, defaults to 'tyk'
  -f, --flavor          enum     k8s environment flavor. This option can be set to 'openshift' and defaults to 'vanilla'
  -e, --expose          enum     set to 'port-forward' to expose the services as port-forwards, 'load-balancer' to expose them as load balancers, or 'ingress' to expose them as k8s ingress objects
  -r, --redis           enum     the redis mode that the tyk stack will use. This option can be set to 'redis' or 'redis-sentinel' and defaults to 'redis-cluster'
  -s, --storage         enum     database the tyk stack will use. This option can be set to 'mongo' (AMD64 only) and defaults to 'postgres'
  -d, --deployments     string   comma separated list of deployments to launch
  -c, --cloud           enum     stand up k8s infrastructure in 'aws', 'gcp' or 'azure'. This requires Terraform and the CLIs associated with the chosen cloud
  -l, --ssl             bool     enable ssl on deployments
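
As a fuller, hypothetical example, the flags can be combined freely; the namespace below is arbitrary:

./up.sh --verbose \
  --namespace tyk-demo \
  --expose port-forward \
  --redis redis-sentinel \
  --deployments operator,prometheus-grafana \
  tyk-stack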

Stop Tyk deployment

Shutdown deployment

Usage:
  ./down.sh [flags]

Flags:
  -v, --verbose         bool     set log level to debug
  -n, --namespace       string   namespace the tyk stack will be installed in, defaults to 'tyk'
  -p, --ports           bool     disconnect port connections only
  -c, --cloud           enum     tear down the k8s cluster that was stood up in the cloud
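
For example, to tear everything down from the default namespace, or to drop only the port-forward connections:

./down.sh
./down.sh --ports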

Clusters

You can get the repository to create demo clusters for you on AWS, GCP, or Azure. That can be set using the --cloud flag and requires the respective cloud CLI to be installed and authorized on your system. You will also need to specify the CLUSTER_LOCATION, CLUSTER_MACHINE_TYPE, CLUSTER_NODE_COUNT, and GCP_PROJECT (for GCP only) parameters in the .env file.
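
As a sketch, a GCP run might add entries like the following to .env (all values here are placeholders for your own project and sizing) and then pass --cloud to the script:

GCP_PROJECT=<your-gcp-project>
CLUSTER_LOCATION=<region-or-zone>
CLUSTER_MACHINE_TYPE=<machine-type>
CLUSTER_NODE_COUNT=<node-count>

./up.sh --cloud gcp tyk-stack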

Examples of .env files for the supported clouds are available in the repository.

For more information, refer to the documentation of the respective cloud CLIs.

Customization

This repository can also act as a guide to help you get set up with Tyk. If you just want to see how a specific tool is set up with Tyk, you can run the script with the --dry-run and --verbose flags. This outputs all the commands that the repository would run to stand up any installation, which is useful for debugging and for working out which configuration options are required to set these tools up.
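
For example, to see everything that a Prometheus/Grafana installation would run without touching your cluster:

./up.sh --dry-run --verbose --deployments prometheus-grafana tyk-stack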

Furthermore, you can also add any Tyk environment variables to your .env file and those variables will be mapped to their respective Tyk deployments.

Example:

...
TYK_MDCB_SYNCWORKER_ENABLED=true
TYK_MDCB_SYNCWORKER_HASHKEYS=true
TYK_GW_SLAVEOPTIONS_SYNCHRONISERENABLED=true

Variables

The script provides defaults for a minimal setup in the example env file and will report errors if a required value is missing. You can also add or change any Tyk environment variables in the .env file, and they will be mapped to the respective extraEnvs section in the helm charts.

Variable                              Default                 Comments
DASHBOARD_VERSION                     v5.5                    Dashboard version
GATEWAY_VERSION                       v5.5                    Gateway version
MDCB_VERSION                          v2.7                    MDCB version
PUMP_VERSION                          v1.11                   Pump version
PORTAL_VERSION                        v1.10                   Portal version
TYK_HELM_CHART_PATH                   tyk-helm                Path to charts; can be a local directory or a Helm repo
TYK_USERNAME                          [email protected]         Default username for all the services deployed
TYK_PASSWORD                          topsecretpassword       Default password for all the services deployed
LICENSE                                                       Dashboard license
MDCB_LICENSE                                                  MDCB license
PORTAL_LICENSE                                                Portal license
TYK_WORKER_CONNECTIONSTRING                                   MDCB URL for worker connection
TYK_WORKER_ORGID                                              Org ID of the dashboard user
TYK_WORKER_AUTHTOKEN                                          Auth token of the dashboard user
TYK_WORKER_USESSL                     true                    Set to true when the MDCB is serving over a TLS connection
TYK_WORKER_SHARDING_ENABLED           false                   Set to true to enable API sharding
TYK_WORKER_SHARDING_TAGS                                      API Gateway segmentation tags
TYK_WORKER_GW_PORT                    8081                    Gateway service port to use
TYK_WORKER_OPERATOR_CONNECTIONSTRING                          Dashboard URL so that the Operator can manage APIs and policies
DATADOG_APIKEY                                                Datadog API key
DATADOG_APPKEY                                                Datadog application key, used to create a dashboard and a pipeline for the Tyk logs
DATADOG_SITE                          datadoghq.com           Datadog site. Change to datadoghq.eu if using the European site
GCP_PROJECT                                                   GCP project for Terraform authentication on GCP
CLUSTER_LOCATION                                              Cluster location that will be created on AKS, EKS, or GKE
CLUSTER_MACHINE_TYPE                                          Machine type for the cluster that will be created on AKS, EKS, or GKE
CLUSTER_NODE_COUNT                                            Number of nodes for the cluster that will be created on AKS, EKS, or GKE
INGRESS_CLASSNAME                     nginx                   The ingress class name used to associate k8s ingress objects with the ingress controller/load balancer