Deploy Tyk Self Managed using the new Helm Chart
New Tyk Helm Charts (Beta)
Tyk is working to provide a new set of Helm charts and will progressively roll them out at tyk-charts. It will provide component charts for all Tyk components, as well as umbrella charts as reference configurations for open source and self-managed users.
Warning
The new Helm Charts are in beta stage. Breaking changes may be introduced before stable release.
To deploy Tyk Self Managed (for a single data center) using the new Helm chart, please use the tyk-single-dc chart.
Tyk Self Managed (Single Data Center)
tyk-single-dc provides the default deployment of Tyk Self Managed on a single data center. It deploys all required Tyk components with the settings provided in the values.yaml file.
It includes:
- Tyk Gateway, an open source Enterprise API Gateway (supporting REST, GraphQL, TCP and gRPC protocols)
- Tyk Pump, an analytics purger that moves the data generated by your Tyk nodes to any back-end. Furthermore, it has all the required modifications to easily connect to the Tyk Cloud or Multi Data Center Bridge (MDCB) control plane.
- Tyk Dashboard, a license-based component that provides a GUI management interface and analytics platform for Tyk
Introduction
By default, this chart installs the following components as sub-charts on a Kubernetes cluster using the Helm package manager.
Component | Enabled by Default | Flag |
---|---|---|
Tyk Gateway | true | n/a |
Tyk Pump | true | global.components.pump |
Tyk Dashboard | true | global.components.dashboard |
To enable or disable each component, change the corresponding enabled flag.
You can also set the version of each component through image.tag. You can find the list of available version tags on Docker Hub.
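For instance, a local values file might toggle components and pin image versions like this (a minimal sketch; the component flags come from the table above, while the image.tag path shown for the Dashboard follows the example later on this page and may differ per sub-chart, so confirm it against helm show values):
global:
  components:
    pump: true        # set to false to skip installing Tyk Pump
    dashboard: true   # set to false to skip installing Tyk Dashboard

tyk-dashboard:
  dashboard:
    image:
      repository: tykio/tyk-dashboard
      tag: v5.0.0     # any tag published on Docker Hub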
Prerequisites
- Kubernetes 1.19+
- Helm 3+
- Redis should already be installed or accessible by the gateway.
Installing The Chart
To install the chart from the Helm repository in namespace tyk with the release name tyk-single-dc:
helm repo add tyk-helm https://helm.tyk.io/public/helm/charts/
helm repo update
helm show values tyk-helm/tyk-single-dc > values-single-dc.yaml --devel
*If you use the Bitnami chart for the Redis installation, the DNS name of your Redis as set by Bitnami is tyk-redis-master.tyk.svc.cluster.local:6379.
You can update them in your local values-single-dc.yaml file under global.redis.addr and global.redis.pass.
Alternatively, you can use the --set flag to set them during Tyk installation. For example --set global.redis.pass=$REDIS_PASSWORD
*All the values above are just examples, please input the values specific to your deployment.
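For reference, the corresponding section of values-single-dc.yaml might look like this (a sketch using the key names above; confirm them against the helm show values output, as the chart may expect a list of Redis addresses):
global:
  redis:
    addr: tyk-redis-master.tyk.svc.cluster.local:6379   # Bitnami default DNS name, including the port
    pass: <your Redis password>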
Then just run:
helm install tyk-single-dc tyk-helm/tyk-single-dc -n tyk --create-namespace -f values-single-dc.yaml --devel
Uninstalling The Chart
helm uninstall tyk-single-dc -n tyk
This removes all the Kubernetes components associated with the chart and deletes the release.
Upgrading The Chart
helm upgrade tyk-single-dc tyk-helm/tyk-single-dc -n tyk --devel
Note: Upgrading from tyk-pro chart
If you were using the tyk-pro chart for an existing release, you cannot upgrade directly. Please modify the values.yaml based on your requirements and install using the new tyk-single-dc chart.
Configuration
To get all configurable options with detailed comments:
helm show values tyk-helm/tyk-single-dc > values-single-dc.yaml --devel
You can update any value in your local values.yaml file and use the -f [filename] flag to override default values during installation.
Alternatively, you can use the --set flag to set values during Tyk installation. See Using Helm for examples.
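For example, a combined override during install or upgrade might look like this (a sketch; the overridden values are only illustrations):
helm upgrade tyk-single-dc tyk-helm/tyk-single-dc -n tyk \
  -f values-single-dc.yaml \
  --set global.components.pump=false \
  --devel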
Set Redis Connection Details (Required)
Tyk uses Redis for distributed rate-limiting and token storage. You may set global.redis.addr and global.redis.pass with the Redis connection string and password respectively.
If you do not already have Redis installed, you may use these charts provided by Bitnami
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install tyk-redis bitnami/redis -n tyk --create-namespace --set image.tag=6.2.13
Follow the notes from the installation output to get connection details and password. The DNS name of your Redis as set by Bitnami is tyk-redis-master.tyk.svc.cluster.local:6379 (Tyk needs the name including the port).
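For example, with the Bitnami release above you can export the generated password and pass both values at install time (a sketch; the secret name tyk-redis and key redis-password follow Bitnami's conventions and may differ in your environment):
export REDIS_PASSWORD=$(kubectl get secret --namespace tyk tyk-redis -o jsonpath="{.data.redis-password}" | base64 -d)
helm install tyk-single-dc tyk-helm/tyk-single-dc -n tyk --create-namespace \
  --set global.redis.addr=tyk-redis-master.tyk.svc.cluster.local:6379 \
  --set global.redis.pass="$REDIS_PASSWORD" \
  -f values-single-dc.yaml --devel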
Set Mongo or PostgreSQL Connection Details (Required)
If you have already installed Mongo/PostgreSQL, you can set the connection details in the global.mongo and global.postgres sections of the values file respectively.
If not, you can use these rather excellent charts provided by Bitnami to install MongoDB/PostgreSQL:
Mongo Installation
helm install tyk-mongo bitnami/mongodb --version {HELM_CHART_VERSION} --set "replicaSet.enabled=true" -n tyk
PostgreSQL Installation
helm install tyk-postgres bitnami/postgresql --set "auth.database=tyk_analytics" -n tyk
Follow the notes from the installation output to get connection details.
NOTE: Please make sure you are installing Mongo/Postgres versions that are supported by Tyk. Please refer to the Tyk docs to get the list of supported versions.
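Once your databases are running, the corresponding sections of the values file could look like this (a sketch; the key names mirror the Pump configuration examples later on this page, and the placeholders need the details from the Bitnami install notes):
global:
  mongo:
    mongoURL: mongodb://<mongo host>:27017/tyk_analytics
  postgres:
    host: tyk-postgres-postgresql.tyk.svc.cluster.local
    port: 5432
    user: postgres
    password: <password from the install notes>
    database: tyk_analytics
    sslmode: disable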
Gateway Configurations
Configure the following settings inside the tyk-gateway section.
Enabling TLS
We have provided an easy way of enabling TLS via the global.tls.gateway flag. Setting this value to true will automatically enable TLS using the certificate provided under tyk-gateway/certs/cert.pem.
If you want to use your own key/cert pair, follow these steps (a sketch of the commands and values follows the list):
- Create a TLS secret using your cert and key pair.
- Set global.tls.gateway to true.
- Set gateway.tls.useDefaultTykCertificate to false.
- Set gateway.tls.secretName to the name of the newly created secret.
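A minimal sketch of those steps, assuming a cert/key pair in the current directory and a hypothetical secret name my-gateway-tls (the nesting under tyk-gateway follows the gateway.tls.* paths above):
kubectl create secret tls my-gateway-tls --cert=cert.pem --key=key.pem -n tyk
Then in your values file:
global:
  tls:
    gateway: true
tyk-gateway:
  gateway:
    tls:
      useDefaultTykCertificate: false
      secretName: my-gateway-tls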
Pump Configurations
To enable Pump, set global.components.pump to true, and configure the settings below inside the tyk-pump section.
Pump | Configuration |
---|---|
Prometheus Pump (Default) | Add prometheus to pump.backend, and add connection details for Prometheus under pump.prometheusPump. |
Mongo Pump | Add mongo to pump.backend, and add connection details for Mongo under .mongo. |
SQL Pump | Add postgres to pump.backend, and add connection details for Postgres under .postgres. |
Uptime Pump | Set pump.uptimePumpBackend to 'mongo' or 'postgres' or '' |
Other Pumps | Add the required environment variables in pump.extraEnvs |
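For example, running the Mongo Pump alongside the default Prometheus Pump could look like this (a sketch, assuming pump.backend accepts a list as the table above implies):
pump:
  backend:
    - prometheus
    - mongo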
Prometheus Pump
Add prometheus to pump.backend, and add connection details for Prometheus under pump.prometheusPump.
We also support monitoring using Prometheus Operator. All you have to do is set pump.prometheusPump.prometheusOperator.enabled to true.
This will create a PodMonitor resource for your Pump instance.
# prometheusPump configures Tyk Pump to expose Prometheus metrics.
# Please add "prometheus" to .Values.pump.backend in order to enable Prometheus Pump.
prometheusPump:
# host represents the host without port, where Tyk Pump serves the metrics for Prometheus.
host: ""
# port represents the port where Tyk Pump serves the metrics for Prometheus.
port: 9090
# path represents the path to the Prometheus collection. For example /metrics.
path: /metrics
# customMetrics allows defining custom Prometheus metrics for Tyk Pump.
# It accepts a string that represents a JSON object. For instance,
#
# customMetrics: '[{"name":"tyk_http_requests_total","description":"Total of API requests","metric_type":"counter","labels":["response_code","api_name","method","api_key","alias","path"]}, { "name":"tyk_http_latency", "description":"Latency of API requests", "metric_type":"histogram", "labels":["type","response_code","api_name","method","api_key","alias","path"] }]'
customMetrics: ""
# If you are using Prometheus Operator, set the fields in the section below.
prometheusOperator:
# enabled determines whether the Prometheus Operator is in use or not. By default,
# it is disabled.
# Tyk Pump can be monitored with PodMonitor Custom Resource of Prometheus Operator.
# If enabled, PodMonitor resource is created based on .Values.pump.prometheusPump.prometheusOperator.podMonitorSelector
# for Tyk Pump.
enabled: false
# podMonitorSelector represents a podMonitorSelector of your Prometheus resource. So that
# your Prometheus resource can select PodMonitor objects based on selector defined here.
# Please set this field to the podMonitorSelector field of your monitoring.coreos.com/v1
# Prometheus resource's spec.
#
# You can check the podMonitorSelector via:
# kubectl describe prometheuses.monitoring.coreos.com <PROMETHEUS_POD>
podMonitorSelector:
release: prometheus-stack
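After installing with prometheusOperator.enabled set to true, you can check that the PodMonitor resource was created (assuming the Prometheus Operator CRDs are present in your cluster):
kubectl get podmonitors -n tyk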
Mongo pump
If you are using the MongoDB pumps in this installation, you will also require MongoDB to be installed.
To install Mongo you can use these rather excellent charts provided by Bitnami:
helm install tyk-mongo bitnami/mongodb --version {HELM_CHART_VERSION} --set "replicaSet.enabled=true" -n tyk
(follow the notes from the installation output to get connection details and update them in the values.yaml file)
NOTE: Here is the list of supported MongoDB versions. Please make sure you are installing a MongoDB Helm chart that matches these versions.
Important Note regarding MongoDB: This Helm chart enables the PodDisruptionBudget for MongoDB with an arbiter replica count of 1. If you intend to perform system maintenance on the node where the MongoDB pod is running and this maintenance requires the node to be drained, this action will be prevented due to the replica count being 1. Increase the replica count in the Helm chart deployment to a minimum of 2 to remedy this issue.
# Set mongo connection details if you want to configure mongo pump.
mongo:
# The mongoURL value will allow you to set your MongoDB address.
# Default value: mongodb://mongo.{{ .Release.Namespace }}.svc.cluster.local:27017/tyk_analytics
# mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
# If your MongoDB has a password you can add the username and password to the url
# mongoURL: mongodb://root:<password>@<mongodb-host>:27017/tyk_analytics?authSource=admin
mongoURL: <MongoDB address>
# Enables SSL for MongoDB connection. MongoDB instance will have to support that.
# Default value: false
# useSSL: false
SQL pump
If you are using the SQL pumps in this installation, you will also require PostgreSQL to be installed.
To install PostgreSQL you can use these rather excellent charts provided by Bitnami:
helm install tyk-postgres bitnami/postgresql --set "auth.database=tyk_analytics" -n tyk
(follow the notes from the installation output to get connection details and update them in the values.yaml file)
# Set postgres connection details if you want to configure postgres pump.
# Postgres connection string parameters.
postgres:
host: tyk-postgres-postgresql.tyk.svc.cluster.local
port: 5432
user: postgres
password:
database: tyk_analytics
sslmode: disable
Uptime Pump
Uptime Pump can be configured by setting pump.uptimePumpBackend in the values.yaml file. It supports the following values:
- mongo: Used to set mongo pump for uptime analytics. Mongo Pump should be enabled.
- postgres: Used to set postgres pump for uptime analytics. Postgres Pump should be enabled.
- empty: Used to disable uptime analytics.
# uptimePumpBackend configures uptime Tyk Pump. ["", "mongo", "postgres"].
# Set it to "" for disabling uptime Tyk Pump. By default, uptime pump is disabled.
uptimePumpBackend: ""
Other Pumps
To set up other backends for Pump, refer to this document and add the required environment variables in pump.extraEnvs.
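As an illustration, pump.extraEnvs takes the standard Kubernetes container environment format. The variable names below only illustrate the TYK_PMP_PUMPS_* pattern; consult the Tyk Pump configuration reference for the exact names of the backend you need:
pump:
  extraEnvs:
    # Illustrative only: enable a CSV pump and point it at a directory.
    - name: TYK_PMP_PUMPS_CSV_TYPE
      value: "csv"
    - name: TYK_PMP_PUMPS_CSV_META_CSVDIR
      value: "/tmp/csv-export"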
Dashboard
The Tyk Dashboard can be configured by modifying the values under the "tyk-dashboard" section of the values.yaml file. The chart is provided with sane defaults; the only hard requirement is the license, which needs to be put under .Values.global.license.dashboard in order for the bootstrapping process to work.
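For example, the license can be set in your local values file like this (the license string is a placeholder):
global:
  license:
    dashboard: "<your Tyk Dashboard license key>"
The remaining Dashboard options live under the tyk-dashboard section, as in the following excerpt: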
tyk-dashboard:
dashboard:
enableOwnership: true
defaultPageSize: 10
notifyOnChange: true
hashKeys: true
enableDuplicateSlugs: true
showOrgId: true
hostConfig:
enableHostNames: true
disableOrgSlugPrefix: true
overrideHostname: "dashboard-svc-tyk-pro.tyk.svc.cluster.local"
homeDir: "/opt/tyk-dashboard"
useShardedAnalytics: false
enableAggregateLookups: true
enableAnalyticsCache: true
allowExplicitPolicyId: true
oauthRedirectUriSeparator: ";"
keyRequestFields: "appName;appType"
dashboardSessionLifetime: 43200
ssoEnableUserLookup: true
notificationsListenPort: 5000
enableDeleteKeyByHash: true
enableUpdateKeyByHash: true
enableHashedKeysListing: true
enableMultiOrgUsers: true
enableIstioIngress: false
replicaCount: 1
image:
repository: tykio/tyk-dashboard
tag: v5.0.0
pullPolicy: Always
service:
type: NodePort
externalTrafficPolicy: Local
annotations: {}
resources: {}
# We usually recommend not to specify default resources and to leave this
# as a conscious choice for the user. This also increases chances charts
# run on environments with little resources, such as Minikube. If you do
# want to specify resources, uncomment the following lines, adjust them
# as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
securityContext:
runAsUser: 1000
fsGroup: 2000
nodeSelector: {}
tolerations: []
affinity: {}
extraEnvs: []
## extraVolumes A list of volumes to be added to the pod
## extraVolumes:
## - name: ca-certs
## secret:
## defaultMode: 420
## secretName: ca-certs
extraVolumes: []
## extraVolumeMounts A list of volume mounts to be added to the pod
## extraVolumeMounts:
## - name: ca-certs
## mountPath: /etc/ssl/certs/ca-certs.crt
## readOnly: true
extraVolumeMounts: []
mounts: []
# Dashboard will only bootstrap if the master bootstrap option is set to true.
bootstrap: true
# The hostname to bind the Dashboard to.
hostName: tyk-dashboard.local
# If set to true the Dashboard will use SSL connection.
# You will also need to set the:
# - TYK_DB_SERVEROPTIONS_CERTIFICATE_CERTFILE
# - TYK_DB_SERVEROPTIONS_CERTIFICATE_KEYFILE
# variables in extraEnvs object array to define your SSL cert and key files.
tls: false
# Dashboard admin information.
adminUser:
firstName: admin
lastName: user
email: [email protected]
# Set a password or a random one will be assigned.
password: "123456"
# Dashboard Organisation information.