
Deploy to Production

So you want to deploy Tyk to production?

There are a few things worth noting that can ensure the performance of your Tyk Gateway nodes. Here are some of the basic things we do when load testing to make sure machines don't run out of resources.

What to expect

Tyk is high performance: with the optimisations below and our new rate limiter, a single 2-core/2GB Digital Ocean Gateway node can easily handle ~1,000 requests per second against a test key set of 20 API tokens.

In the results below, Tyk evaluates each request through its access control list, rate limiter, quota evaluator, and analytics recorder across 20 test tokens (randomly assigned per request) and still keeps latency firmly under 30ms:

[Image: Tyk 2.0 performance]

A bigger test set

Stretching the same test against a larger set of 2,000 users over three minutes, we see the same solid performance, with responsiveness starting to leave the ~30ms latency level at around 1,500 requests per second:

[Image: Tyk 2.0 performance]

(These tests were produced with Blitz.io, with runs ramping from 0 to 1,000 users and from 0 to 2,000 users in 2 minutes respectively, using 20 sample access tokens.) The Redis DB was not running on the same server as the Gateway.

Change all the shared secrets

Tyk uses a number of shared secrets between services, and some of these have default values in the configuration files. Make sure these are changed before deploying to production.
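
As a minimal sketch, assuming a standard tyk.conf: the Gateway's own API secret is the top-level "secret" field, and the secret shared with the Dashboard is "node_secret" (the values below are placeholders, not defaults to copy):

"secret": "replace-with-a-long-random-string",
"node_secret": "replace-with-another-long-random-string"

The corresponding fields in tyk_analytics.conf (such as the shared node secret) need to be changed to match.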

Split out your DB

This is a no-brainer, but keep Redis and MongoDB off the system running the Gateway. They both use a lot of RAM, and with Redis and the Gateway constantly communicating you will face CPU contention for only a marginal decrease in latency.

In our setup, we recommend that Redis and MongoDB live on their own systems, separate from your Tyk Gateway. If you like, run them together on the same box; that's up to you.

The network topology we like to use is:

  • Two Tyk Gateway nodes (load balanced; a minimal balancer sketch follows this list)
  • A separate MongoDB cluster
  • A separate Redis server with fail-over or a cluster
  • One Tyk Dashboard node with its own local Gateway process
  • One Tyk purger node that handles data transitions
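
A minimal sketch of the load-balancing layer, assuming NGINX in front of the two Gateway hosts (the hostnames are illustrative; port 8080 is the Gateway's default listen port):

# /etc/nginx/conf.d/tyk.conf – round-robin across two Gateway nodes
upstream tyk_gateways {
    server gateway1.internal:8080;
    server gateway2.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://tyk_gateways;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}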

Make sure you have enough Redis connections

Tyk makes heavy use of Redis in order to provide a fast and reliable service, and to do so effectively it keeps a passive connection pool ready. For high-performance setups this pool needs to be expanded to handle more simultaneous connections, otherwise you may run out of Redis connections.

Tyk also lets you set a maximum number of open connections, so that you don’t over-commit connections to the server.

To set your maximums and minimums, edit your tyk.conf and tyk_analytics.conf files to include:

"storage": {
    "optimisation_max_idle": 2000,
    "optimisation_max_active": 4000,

Set the max_idle value to something large; we usually leave it around 2000 for HA deployments. Then set max_active to your upper limit (i.e. how many additional connections over the idle pool may be used).

Health checks are expensive

In order to keep real-time health-check data and make it available to the Health-check API, Tyk needs to record information for every request in a rolling window. This is an expensive operation and can limit throughput. You have two options: switch it off, or get a box with more cores.
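
To switch it off, set enable_health_checks to false in the health_check section of tyk.conf (the timeout value shown is illustrative):

"health_check": {
    "enable_health_checks": false,
    "health_check_value_timeouts": 60
}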

Use the optimisation settings

Tyk has an asynchronous rate limiter that provides a smoother performance curve than the default transaction-based one; this rate limiter will be switched on by default in future versions.

To enable this rate limiter, make sure the settings below are set in your tyk.conf (set enforce_org_quotas to true only if you run a multi-tenant setup):

"close_connections": true,
"enforce_org_quotas": false, // only true if multi-tenant
"enforce_org_data_detail_logging": false,
"experimental_process_org_off_thread": true,
"enable_non_transactional_rate_limiter": true,
"enable_sentinel_rate_limiter": false,
"local_session_cache": {
    "disable_cached_session_state": false

The above settings ensure connections are closed (no TCP re-use), remove a transaction from the middleware run that enforces org-level rules, enable the new rate limiter, and set Tyk up to use an in-memory cache for session-state data, saving a round-trip to Redis on some transactions.

Use the right hardware

Tyk is CPU-bound; you will get significantly better performance the more cores you throw at Tyk. It's that simple. Tyk will automatically spread itself across all cores to handle traffic, but if expensive operations like health checks are enabled, those can cause keyspace contention, so while more cores help, health checks will still throttle throughput.

Resource limits

Make sure your system has resource limits set to handle lots of inbound traffic.

File handles

Systems under heavy strain are likely to run out of file descriptors, so we need to make sure this is set up properly.

Set the global file limits like this:

In the file /etc/sysctl.conf, add the following (the 80000 value here matches the per-process limits set below):

fs.file-max = 80000

For the file: /etc/security/limits.conf, add:

* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000
root soft nproc 80000
root hard nproc 80000
root soft nofile 80000
root hard nofile 80000

The above is a catch-all! On some systems and init systems the wildcard limit doesn't always work.
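
To verify the new limits are in effect for the user running Tyk, standard shell checks work:

ulimit -n                     # per-process open file limit for the current shell
cat /proc/sys/fs/file-max     # system-wide file handle limit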

Ubuntu and Upstart

If you are using Upstart, then you’ll need to set the file handle limits in the init script (/etc/init/tyk-gateway.conf):

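# limit nofile <soft limit> <hard limit>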
limit nofile 50000 80000

TCP connection recycling

Use this at your own peril: it's not always recommended, and you need to be sure it fits your use case, but you can squeeze a bit more performance out of a Gateway install by running this on the command line:

sysctl -w net.ipv4.tcp_tw_recycle=1

Be careful with it, though: it could lead to unterminated connections.
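
To make the setting persist across reboots, it can also be added to /etc/sysctl.conf (the same caveats apply):

net.ipv4.tcp_tw_recycle = 1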
