Putting Tyk Through its Paces

The Tyk Open Source API Gateway is performant and efficient, and we’re not shy about putting numbers where our mouths are. We load test every release of Tyk, both as a canary build on Tyk Cloud and pre-release with blitz.io rush tests. Here are some of the results.

For those who just want the highlights:

If you want to see all the pretty graphs, the full results and analysis continue below…

A vanilla test

The following results are based on two different tests: one a fully-loaded test from 0 to 2,000 users in 2 minutes, the other a minimal vanilla test from 0 to 3,000 users in 3 minutes. Our gateway was set up with an external Redis database, a gateway server and a target host. All three were 2-core, 2GB RAM servers from Digital Ocean, costing about $20 a month to run.

So, let’s start with a vanilla test. This is the kind of test that you will find elsewhere, and is what most providers will publish. This first test is Tyk 2.3 running an open API (no authentication), recording only analytics data.
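As a rough sketch, an open (keyless) API like the one in this test can be declared with a Tyk API definition along these lines. The names, IDs and target URL here are illustrative, and exact fields may vary between Tyk versions:

```json
{
  "name": "Vanilla Test API",
  "api_id": "vanilla-test",
  "active": true,
  "use_keyless": true,
  "proxy": {
    "listen_path": "/vanilla/",
    "target_url": "http://target-host:8080/",
    "strip_listen_path": true
  },
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": { "name": "Default" }
    }
  }
}
```

With `use_keyless` set, the gateway skips all auth checks; analytics recording is switched on separately in the gateway configuration (`enable_analytics` in `tyk.conf`).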

Basically, this test shows us how well Tyk proxies a request while doing very little work.

The Skinny

So here we can see that Tyk drops no requests, produces no errors, and keeps a nice steady latency of ~20ms to ~65ms all the way up to ~3,000 requests per second.

At this point, Tyk is still recording a bunch of analytics data that is being stored in Redis.

What does it mean?

Tyk is a pretty performant proxy: it adds very little overhead to requests, keeps performance steady and ensures that requests get through without causing a fuss. Most importantly, even when Tyk is “doing nothing” it is still doing work: it’s recording analytics.

A real test

Now let’s look at a real test, with Tyk actually doing some of the activities you would expect from an API Gateway, under some decent load, with a representative number of client tokens. The activities we are evaluating here are on a closed API: for each request, we want to see the Gateway validate the token, enforce its quota and rate limit, proxy the request and record analytics.

Test analysis

Now that’s a different picture! Here we can see that Tyk is really working hard; in fact, in contrast to the vanilla test, we can see some strain.

Most importantly, we can see that Tyk is performing swimmingly (remember, this is a $20 server!). In this test we expected resource limits to be hit, and the node to start struggling under higher load. Given those expectations, the results hold up well.

This test had to do a lot more work. For each request, Tyk evaluated the inbound token’s access rights, checked its quota, checked that its rate limit had not been exceeded, and then proxied the request. It then recorded the analytics of the request and response and stored them in the database. All of that with minimal latency, on cheap commodity hardware.