
Optimising the Cache Storage

Tyk creates the API cache in Redis because it offers high performance and low latency. By default, the cache uses the same Redis database that is used to store the API keys, minimising the deployment footprint.

For multi-data centre deployments, each Data Plane has a locally deployed Redis, giving it a localised cache close to the traffic-serving Gateways.

The cache key is used as the Redis key for quick lookups.

For high-traffic systems that make heavy use of caching, it can make sense to use separate Redis databases for cache storage and for API keys, at the expense of an increased deployment footprint.

Configuring a separate cache

To enable a separate cache server, you must deploy one or more additional Redis instances and add configuration to your Tyk Gateway’s tyk.conf file.

You must:

  • set enable_separate_cache_store to true
  • provide additional Redis connection information in the cache_storage section

For example:

{
  "enable_separate_cache_store": true,
  "cache_storage": {
    "type": "redis",
    "host": "",
    "port": 0,
    "addrs": [
      "localhost:6379"
    ],
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 3000,
    "optimisation_max_active": 5000,
    "enable_cluster": false
  }
}

The separate Redis cache uses the same configuration options (and the same underlying driver) as the regular Redis storage, so Redis Cluster is fully supported. If you set enable_cluster to false, you only need a single entry in addrs.
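For example, a clustered cache pointing at several Redis nodes might look roughly like this (the host names below are illustrative, not defaults):

{
  "enable_separate_cache_store": true,
  "cache_storage": {
    "type": "redis",
    "addrs": [
      "redis-cache-1:6379",
      "redis-cache-2:6379",
      "redis-cache-3:6379"
    ],
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 3000,
    "optimisation_max_active": 5000,
    "enable_cluster": true
  }
}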

Note

Prior to Tyk Gateway v2.9.3, the hosts field was used instead of addrs; hosts has been deprecated since v2.9.3.
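As a rough sketch of the difference, the deprecated hosts field took a map of host to port, whereas addrs takes a list of host:port strings (the values below are illustrative):

Deprecated (pre-v2.9.3):

  "hosts": {
    "localhost": "6379"
  }

Current:

  "addrs": [
    "localhost:6379"
  ]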