Tyk v2.2 Documentation Components

Rate Limiting

Also known as throttling, rate limiting means that Tyk will actively only allow a key to make x requests per y time period. This is very useful if you want to ensure your API does not get flooded with requests.

How do rate limits work?

There are two rate limiters in Tyk as of v2.3: the hard-synchronised rate limiter (the rate limiter used in v2.2) and the distributed rate limiter (referred to as the DRL from here on). The two rate limiters come with different benefits and trade-offs, and in v2.3 we have opted to make the DRL the default rate limiter.

Hard-synchronised rate limiter

Here the limit is enforced using a pseudo “leaky bucket” mechanism: Tyk records each request in a timestamped list in Redis. At the same time, it counts the number of requests that fall between the current time and the start of the rate-limit period, removing any older entries from the list. If this count exceeds the number of requests allowed over the period, the request is blocked.

This approach means that rate limits are applied equally and near-instantaneously across all Gateway instances, and that the limit is a “moving window”, so there is no fixed point in time at which a client could flood the limiter or execute more requests than it is permitted.
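
For illustration, here is a minimal in-memory sketch of that moving-window check in JavaScript. Tyk itself keeps this list per key in Redis; the names below (requestLog, allowRequest) are purely illustrative and are not Tyk internals.

    // A minimal in-memory sketch of the moving-window check described
    // above. Tyk keeps this list per key in Redis; the names here are
    // illustrative only.
    var requestLog = []; // timestamps (ms) of requests made with one key

    function allowRequest(maxRequests, periodMs) {
        var now = Date.now();
        var windowStart = now - periodMs;

        // Drop entries that have fallen out of the moving window
        requestLog = requestLog.filter(function (ts) {
            return ts > windowStart;
        });

        if (requestLog.length >= maxRequests) {
            return false; // over the limit: block this request
        }
        requestLog.push(now);
        return true;
    }

Because the window always ends at “now”, a client cannot wait for a reset point and burst through the limit, which is the property described above.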

The downside of this rate limiter is that it generates a high volume of traffic to and from Redis, which can make Redis itself a bottleneck in high-traffic situations.

Distributed Rate Limiter (DRL)

The distributed rate limiter operates on an eventual-consistency basis. It too uses a “leaky bucket” algorithm, but the rate limiter is not explicitly synchronised across instances via Redis. Instead, each rate limiter is entirely in-memory within the instance servicing the request, and the “size” of the token bucket is determined by a quorum established between Tyk instances that share a common zone or tag group.

This approach means that Tyk continually measures the load on each running instance, and then uses the load across the whole cluster to calculate a value by which to normalise the leaky buckets on every instance. This happens eventually (within a second or so) and is entirely dynamic. If a new instance joins the cluster, or an instance leaves it, the token bucket value is recalculated and rate limits rebalance.
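
As a sketch of the rebalancing idea only: the even split below is a simplification (Tyk's quorum weights the value by measured load on each instance), and the function names are assumed for illustration.

    // Illustrative only: a per-instance bucket whose size is normalised
    // by cluster size. Tyk's real quorum weights this by the measured
    // load on each instance rather than splitting the rate evenly.
    var bucketSize = 0;
    var tokens = 0;

    // Called whenever an instance joins or leaves the cluster
    function rebalance(globalRate, instanceCount) {
        bucketSize = globalRate / instanceCount;
        tokens = Math.min(tokens, bucketSize);
    }

    // Called once per rate-limit period to refill the local bucket
    function refill() {
        tokens = bucketSize;
    }

    function allowRequest() {
        if (tokens < 1) {
            return false; // local bucket drained: block the request
        }
        tokens -= 1;
        return true;
    }

Because each decision is made against local memory, no Redis round-trip is needed on the request path; Redis is only involved in the background coordination between instances.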

The benefit of this approach is scalability and speed – the DRL is much more performant and puts much less pressure on Redis, meaning smaller deployments and higher availability.

Can I disable the rate limiter?

Yes, the rate limiter can be disabled for an API by checking Disable Rate Limits in the API Designer, or by setting the value of disable_rate_limit to true in your API definition.
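
For example, in the raw API definition JSON (all other fields omitted for brevity; "My API" is a placeholder name):

    {
        "name": "My API",
        "disable_rate_limit": true
    }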

Can I rate limit by IP address?

Not yet, though IP-based rate limiting is possible using custom JavaScript pre-processor middleware that generates tokens based on IP addresses, as sketched below.
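
As a rough sketch of that approach, a pre-processor could derive a key from the client's IP and place it in the auth header. Reading the client IP from X-Forwarded-For and writing to the Authorization header are assumptions for illustration; adjust both to your deployment, and note that each derived token must actually exist (for example, pre-created via the Tyk REST API).

    // Sketch of a JS pre-processor that keys rate limiting off the
    // client IP. X-Forwarded-For and Authorization are assumptions
    // for illustration; adapt to your deployment.
    var ipRateLimitMiddleware = new TykJS.TykMiddleware.NewMiddleware({});

    ipRateLimitMiddleware.NewProcessRequest(function(request, session) {
        var xff = request.Headers["X-Forwarded-For"];
        var clientIp = (xff && xff.length) ? xff[0] : "unknown";

        // Use a token derived from the IP so each address gets its
        // own rate-limit bucket
        request.SetHeaders["Authorization"] = "ip-" + clientIp;

        return ipRateLimitMiddleware.ReturnData(request, {});
    });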
