
Quick Start in Kubernetes

The tyk-k8s-demo repository allows you to start up an entire Tyk Stack with all its dependencies as well as other tools that can integrate with Tyk. The repository will spin up everything in Kubernetes using helm and bash magic to get you started.

Purpose

Minimize the effort needed to start up the Tyk infrastructure and show examples of how Tyk can be set up in k8s using different deployment architectures as well as different integrations.

Getting Started

Requirements

You will need the following tools to be able to run this library.

Tested on Linux/Unix-based systems on AMD64 and ARM architectures.
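
Since the scripts wrap kubectl and helm (the --dry-run flag described below dumps the exact commands they would run), a quick way to confirm both CLIs are available before starting, for example:

kubectl version --client
helm version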

Initial setup

Create a .env file and update the appropriate fields with your licenses. If you require a trial license, you can obtain one here. If you are looking to use the tyk-gateway deployment only, you will not require any licensing, as that is the open-source deployment.

git clone https://github.com/TykTechnologies/tyk-k8s-demo.git
cd tyk-k8s-demo
cp .env.example .env

Depending on the deployments you would like to install, set the values of LICENSE, MDCB_LICENSE, and PORTAL_LICENSE inside the .env file.
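
For example, the license entries in .env might look like this (placeholder values shown; substitute your own keys):

LICENSE=<your-dashboard-license-key>
MDCB_LICENSE=<your-mdcb-license-key>
PORTAL_LICENSE=<your-portal-license-key>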

Minikube

If you are deploying this on Minikube, you will need to enable the ingress addon. You can do so by running the following commands:

minikube start
minikube addons enable ingress
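
If you want to double-check that the ingress controller came up before deploying, the addon normally runs it in the ingress-nginx namespace:

kubectl get pods -n ingress-nginx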

Quick Start

./up.sh --deployments portal,operator-httpbin tyk-stack

This quick start command will start up the entire Tyk stack along with the Tyk Enterprise Portal, the Tyk Operator, and an httpbin CRD example.

Possible deployments

  • tyk-stack: Tyk single region self-managed deployment
  • tyk-cp: Tyk self-managed multi region control plane (Will be available in Q1 2024 with new Tyk Charts. Please use the v2 tag if you would like to deploy this)
  • tyk-dp: Tyk self-managed data plane, this can connect to Tyk Cloud or a Tyk Control Plane
  • tyk-gateway: Tyk OSS self-managed single region
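
Each of these is passed as the positional command to ./up.sh. For example, to stand up just the open-source gateway, which requires no license:

./up.sh tyk-gateway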

Dependencies Options

Redis Options

  • redis: Bitnami Redis deployment
  • redis-cluster: Bitnami Redis Cluster deployment
  • redis-sentinel: Bitnami Redis Sentinel deployment
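
The Redis mode is selected with the -r/--redis flag described under Usage below. For example, to run the Tyk stack against Redis Sentinel:

./up.sh --redis redis-sentinel tyk-stack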

Storage Options

  • mongo: MongoDB deployment (default)
  • postgres: PostgreSQL deployment

Supplementary Deployments

Please see this page for Tyk deployments compatibility charts.

  • cert-manager: deploys cert-manager.
  • datadog: deploys the Datadog agent and starts up Tyk Pump to push analytics data from the Tyk platform to Datadog. It will also create a Datadog dashboard for you to view the analytics.
  • elasticsearch: deploys Elasticsearch and starts up Tyk Pump to push analytics data from the Tyk platform to Elasticsearch.
    • elasticsearch-kibana: deploys Elasticsearch and Kibana. A Kibana dashboard will be created for you to view the analytics.
  • jaeger: deploys the Jaeger Operator, a Jaeger instance, and the OpenTelemetry Collector, and configures the Tyk deployment to send telemetry data to Jaeger through the OpenTelemetry Collector.
  • k6: deploys a Grafana K6 Operator.
    • k6-slo-traffic: deploys a k6 CRD to generate a load of traffic to seed analytics data.
  • keycloak: deploys the Keycloak Operator and a Keycloak instance.
    • keycloak-dcr: starts up a Keycloak Dynamic Client Registration example.
    • keycloak-jwt: starts up a Keycloak JWT Authentication example with Tyk.
    • keycloak-sso: starts up a Keycloak SSO example with the Tyk dashboard.
  • operator: deploys the Tyk Operator and its dependency cert-manager.
    • operator-federation: starts up Federation v1 API examples using the tyk-operator.
    • operator-graphql: starts up GraphQL API examples using the tyk-operator.
    • operator-httpbin: starts up an httpbin API example using the tyk-operator.
    • operator-jwt-hmac: starts up API examples using the tyk-operator to demonstrate JWT HMAC auth.
    • operator-udg: starts up a Universal Data Graph API example using the tyk-operator.
  • portal: deploys the Tyk Enterprise Developer Portal as well as its dependency PostgreSQL.
  • prometheus: deploys Prometheus and starts up Tyk Pump to push analytics data from the Tyk platform to Prometheus.
    • prometheus-grafana: deploys Prometheus and Grafana and creates a Grafana dashboard for you to view the analytics.
  • vault: deploys Vault Operator and a Vault instance.

If you are running a POC and would like an example of how to integrate a specific tool, please submit a request through the repository here.

Example

./up.sh \
  --storage postgres \
  --deployments prometheus-grafana,k6-slo-traffic \
  tyk-stack

The deployment will take around 10 minutes, as the installation is sequential and the dependencies need some time to start up. Once the installation is complete, the script will print out a list of all the services that were started and how they can be accessed. The k6 job will start running once the script finishes and will run in the background for 15 minutes, generating traffic over that period. To visualize the live traffic, you can use the credentials provided by the script to access Grafana or the Tyk Dashboard.
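
If you want to keep an eye on the k6 load-generation job while it runs, you can watch the pods in the install namespace (tyk by default, assuming you did not override -n/--namespace):

kubectl get pods -n tyk --watch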

Usage

Start Tyk deployment

Create and start up the deployments

Usage:
./up.sh [flags] [command]

Available Commands:
tyk-stack
tyk-cp
tyk-dp
tyk-gateway

Flags:
-v, --verbose         bool     set log level to debug
    --dry-run         bool     set the execution mode to dry run. This will dump the kubectl and helm commands rather than execute them
-n, --namespace       string   namespace the tyk stack will be installed in, defaults to 'tyk'
-f, --flavor          enum     k8s environment flavor. This option can be set to 'openshift' and defaults to 'vanilla'
-e, --expose          enum     set to 'port-forward' to expose the services as port-forwards, 'load-balancer' to expose them as load balancers, or 'ingress' to expose them as k8s ingress objects
-r, --redis           enum     the Redis mode that the tyk stack will use. This option can be set to 'redis' or 'redis-sentinel' and defaults to 'redis-cluster'
-s, --storage         enum     database the tyk stack will use. This option can be set to 'postgres' and defaults to 'mongo'
-d, --deployments     string   comma-separated list of deployments to launch
-c, --cloud           enum     stand up k8s infrastructure in 'aws', 'gcp' or 'azure'. This will require Terraform and the CLIs associated with the cloud of choice
-l, --ssl             bool     enable ssl on deployments
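
Flags can be combined freely with a single positional command. For example, the following (with illustrative values) would print, without executing, the commands for a Postgres-backed, Sentinel-backed stack with the httpbin Operator example in a custom namespace:

./up.sh \
  --namespace tyk-demo \
  --storage postgres \
  --redis redis-sentinel \
  --expose port-forward \
  --deployments operator-httpbin \
  --dry-run \
  tyk-stack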

Stop Tyk deployment

Shutdown deployment

Usage:
  ./down.sh [flags]

Flags:
  -v, --verbose         bool     set log level to debug
  -n, --namespace       string   namespace the tyk stack will be installed in, defaults to 'tyk'
  -p, --ports           bool     disconnect port connections only
  -c, --cloud           enum     tear down the k8s cluster that was stood up in the cloud
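
For example, to tear down a stack installed into a custom namespace, or to drop only the port-forward connections:

./down.sh --namespace tyk-demo
./down.sh --ports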

Clusters

You can get the library to create demo clusters for you on AWS, GCP, or Azure. That can be set using the --cloud flag and requires the respective cloud CLI to be installed and authorized on your system. You will also need to specify the CLUSTER_LOCATION, CLUSTER_MACHINE_TYPE, CLUSTER_NODE_COUNT and GCP_PROJECT (for GCP only) parameters in the .env file.
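
A minimal sketch of the cluster-related .env entries for a GCP run (the location, machine type, node count, and project ID below are illustrative placeholders):

...
CLUSTER_LOCATION=europe-west2
CLUSTER_MACHINE_TYPE=e2-standard-4
CLUSTER_NODE_COUNT=2
GCP_PROJECT=my-gcp-project

With those set, the cluster and the stack can be stood up together, for example:

./up.sh --cloud gcp tyk-stack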

You can find examples of .env files here:

For more information about cloud CLIs:

Customization

This library can also act as a guide to help you get set up with Tyk. If you just want to know how to set up a specific tool with Tyk, you can run the library with the --dry-run and --verbose flags. This will output all the commands that the library will run to stand up any installation. This can be helpful for debugging as well as figuring out what configuration options are required to set these tools up.
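
For example, to print the full set of commands for the Keycloak JWT integration without installing anything:

./up.sh --dry-run --verbose --deployments keycloak-jwt tyk-stack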

Furthermore, you can also add any Tyk environment variables to your .env file and those variables will be mapped to their respective Tyk deployments.

Example:

...
TYK_MDCB_SYNCWORKER_ENABLED=true
TYK_MDCB_SYNCWORKER_HASHKEYS=true
TYK_GW_SLAVEOPTIONS_SYNCHRONISERENABLED=true

Variables

The script has defaults for minimal settings in this env file, and it will give errors if something is missing. You can also add or change any Tyk environment variables in the .env file, and they will be mapped to the respective extraEnvs section in the helm charts.

| Variable | Default | Comments |
|----------|---------|----------|
| DASHBOARD_VERSION | v5.2 | Dashboard version |
| GATEWAY_VERSION | v5.2 | Gateway version |
| MDCB_VERSION | v2.3 | MDCB version |
| PUMP_VERSION | v1.8 | Pump version |
| PORTAL_VERSION | v1.8 | Portal version |
| TYK_HELM_CHART_PATH | tyk-helm | Path to charts, can be a local directory or a helm repo |
| TYK_USERNAME | [email protected] | Default username for all the services deployed |
| TYK_PASSWORD | topsecretpassword | Default password for all the services deployed |
| LICENSE | | Dashboard license |
| MDCB_LICENSE | | MDCB license |
| PORTAL_LICENSE | | Portal license |
| TYK_WORKER_CONNECTIONSTRING | | MDCB URL for worker connection |
| TYK_WORKER_ORGID | | Org ID of the dashboard user |
| TYK_WORKER_AUTHTOKEN | | Auth token of the dashboard user |
| TYK_WORKER_USESSL | true | Set to true when the MDCB is serving on a TLS connection |
| TYK_WORKER_SHARDING_ENABLED | false | Set to true to enable API Sharding |
| TYK_WORKER_SHARDING_TAGS | | API Gateway segmentation tags |
| TYK_WORKER_GW_PORT | 8081 | Set the gateway service port to use |
| TYK_WORKER_OPERATOR_CONNECTIONSTRING | | Set the dashboard URL for the Operator to be able to manage APIs and Policies |
| DATADOG_APIKEY | | Datadog API key |
| DATADOG_APPKEY | | Datadog application key, used to create a dashboard and a pipeline for the Tyk logs |
| DATADOG_SITE | datadoghq.com | Datadog site. Change to datadoghq.eu if using the European site |
| GCP_PROJECT | | The GCP project for Terraform authentication on GCP |
| CLUSTER_LOCATION | | Cluster location that will be created on AKS, EKS, or GKE |
| CLUSTER_MACHINE_TYPE | | Machine type for the cluster that will be created on AKS, EKS, or GKE |
| CLUSTER_NODE_COUNT | | Number of nodes for the cluster that will be created on AKS, EKS, or GKE |
| INGRESS_CLASSNAME | nginx | The ingress classname used to associate the k8s ingress objects with the ingress controller/load balancer |