Putting Tyk Through its Paces
The Tyk Open Source API Gateway is performant and efficient, and we’re not shy about putting numbers where our mouths are. We load test every release of Tyk, as well as canary builds on Tyk Cloud. Here are some of the results.
Tyk can easily proxy ~4,000 requests per second, with a pretty flat performance curve, and minimal added latency.
Under load, doing full key validation, security checks, quota management, and analytics gathering, Tyk can handle ~3,000 requests per second.
All of this on a 2-core, 2GB, $20 p/m commodity virtual server.
Our performance testing plan focuses on replicating a realistic customer installation on cheap commodity VMs. This means not optimizing for “benchmarks”: no supercomputers, and no sub-millisecond intra-DC latency. Instead, we test on an average-performance 2-CPU virtual server, with 50ms of latency between Tyk and the upstream. For testing, we run the Tyk Gateway in Hybrid mode with its default config, and the test runner generates load using the Locust framework with Boomer.
So, let’s start with a vanilla test. This is the kind of test you will find elsewhere, and it’s what most providers publish: Tyk 2.7 running an open API (no authentication), recording only analytics data.
Basically, this test shows us how well Tyk proxies a request while doing very little work.
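For reference, an “open” API in Tyk is just an API definition with keyless access enabled. A minimal sketch might look like the following – the name, API id, listen path, and target URL here are illustrative, not the values used in the test:

```json
{
  "name": "Vanilla Test API",
  "api_id": "vanilla-test",
  "use_keyless": true,
  "active": true,
  "proxy": {
    "listen_path": "/vanilla/",
    "target_url": "http://upstream.example.com:8000/",
    "strip_listen_path": true
  },
  "version_data": {
    "not_versioned": true,
    "versions": { "Default": { "name": "Default" } }
  }
}
```

With `use_keyless` set, no token is required on requests, but the gateway still records analytics for the API unless tracking is explicitly disabled in the definition.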
So here we can see that Tyk drops no requests, produces no errors, and keeps a nice steady latency all the way up to ~4,000 requests per second.
At this point, Tyk is still recording a bunch of analytics data that is being stored in Redis.
What does it mean?
Tyk is a pretty performant proxy – it adds very little overhead to requests, keeping performance steady and ensuring that requests get through without causing a fuss. Most importantly, even when Tyk is doing “nothing” it is still doing work: it’s recording analytics.
Now let’s look at a real test, with Tyk actually doing the activities you would expect from an API Gateway, under some decent load and with a representative number of client tokens. This time we are running a closed API, and we want to see the Gateway validate each inbound token, check its access rights, enforce its rate limit and quota, proxy the request, and record analytics.
Here we can see that Tyk is really working hard, with only a ~25% throughput drop compared to the vanilla test, and keeping latency smooth at up to ~3,000 requests per second.
Most importantly, we can see that Tyk is performing swimmingly (remember, this is a $20 server!). In this test we expect resource limits to be hit, and for the node to start to strain under higher load.
This test had to do a lot more work. For each request, Tyk evaluated the inbound token’s access rights, checked its quota, checked that its rate limit was still within bounds, and then proxied the request. It then recorded the analytics of the request and response and stored them in the database. All of that with minimal latency, on cheap commodity hardware.
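The token being evaluated on each request is a Tyk session object stored in Redis. As a rough sketch, assuming an illustrative API id of `closed-test` and made-up limits, a session carrying the rate limit, quota, and access rights checked above might look like:

```json
{
  "rate": 1000,
  "per": 60,
  "quota_max": 100000,
  "quota_renewal_rate": 3600,
  "access_rights": {
    "closed-test": {
      "api_id": "closed-test",
      "api_name": "Closed Test API",
      "versions": ["Default"]
    }
  }
}
```

Here `rate`/`per` allow up to 1,000 requests per 60 seconds, and `quota_max` caps total usage per `quota_renewal_rate` window; on every request the gateway checks `access_rights`, the rate limiter, and the remaining quota before proxying, which is exactly the per-request work described above.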
Tyk Open Source Gateway on GitHub