
What could be better? What needs to be added? Tell us, we can take it!

At Tyk, we are committed to helping people connect systems. We have some pretty great ideas on how that should happen. Tens of thousands of Tyk users agree. Maybe even you?

When it comes to our roadmap, we listen to the thousands of open source users, Pro license holders and cloud subscribers. We listen, build and release. Always getting better. Fast.

We believe we have the most transparent roadmap in the industry: it’s a Trello board, and it sits here.

We would love you to contribute to this, so let us know what your priorities are. You can feed into it by contacting us through the forum, GitHub, Gitter or email.

So this post is aimed at you. We know you have an opinion, and we know you think there is a better way of doing things. That’s why you use Tyk and not “Boring McBoringface’s Monolith API Stack”. So whether you have the Community, Pro, Enterprise or Cloud edition, tell us what you want from Tyk.

Delivering performance with version 1.7.1

We get compared to other API gateways a lot, and we haven’t published any benchmarks recently. So we went on a major optimisation drive to make sure that Tyk is genuinely competitive and performant. The result is version 1.7.1 of the gateway, and it flies.

As usual, you can get v1.7.1 from our GitHub releases page. It’s fully compatible with the latest dashboard (0.9.5), so you can do a drop-in replacement of the binary to get the improvements. Nothing else is needed.

We say there are performance improvements, so what are they? Let’s look back: in the last round of benchmarks we did, we found that Tyk could handle about 400 requests per second before getting sweaty. Those tests were done using various tools (Gatling and LoadRunner) on various hardware, including a local VM!

With version 1.7 we found that you could push that a bit higher, to around 600 rps, thanks to some smaller optimisations where we offloaded non-returning write operations from the main thread into goroutines.

Now with 1.7.1 we’ve made Tyk work well at over 1000 rps on cheaper hardware.


In our earlier benchmarks we used 4-core machines since Tyk is CPU bound, so the more cores, the better. But 4 cores are pricey and you don’t want to be running a fleet of them. So the benchmarks presented below were set up on a much smaller 2-core machine. You can find the full test setup detail at the bottom of this blog post.

In brief, the test setup consisted of three Digital Ocean servers in their $0.03 p/h price bracket: 2GB, 2-core Ubuntu boxes. These suckers are cheaper than the dual-core offerings at AWS, which is why we picked them. (AWS t2.medium instances are actually pretty competitive, but their burstable CPU capacity causes locks when it kicks in, and at $0.052 per hour they are still about 2 cents more expensive, so they aren’t great for high-availability tests.)

The test was performed in “rush” mode, ramping from 0 to 2,000 concurrent connections. That totalled about 1,890 requests per second at the end of the test.


Tyk 1.7.1 performance benchmarks


The average latency of 20ms is down to the load generators operating out of Virginia (AWS) and the Digital Ocean servers living in New York. When we ran comparison benchmarks using AWS instances, we found the latency was around 6ms, so the overhead is network latency, not software generated.

As can be seen from the results, Tyk 1.7.1 performed well, with an average latency of 28ms overall, serving 115,077 requests in 120 seconds at an average of roughly 959 requests per second. That translates to about 82,855,440 hits per day.
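As a quick sanity check, the headline numbers can be reproduced from the raw totals reported above (115,077 requests over 120 seconds):

```python
# Sanity-check the benchmark arithmetic from the results above.
total_requests = 115_077
duration_seconds = 120

rps = total_requests / duration_seconds   # average requests per second
hits_per_day = rps * 60 * 60 * 24         # extrapolated daily volume

print(round(rps))           # → 959
print(round(hits_per_day))  # → 82855440
```

The daily figure is a straight-line extrapolation of the 120-second test, so it assumes sustained, even load.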

This is on a single node that costs 3 cents per hour. We’re pretty chuffed with that. We can see there are a few errors and timeouts, but at a negligible level compared to the traffic volume hitting the machine.

This release marks a major performance and stability boost to the Tyk gateway which we’re incredibly proud to share with our users.

As always, get in touch with us in the comments or in our community portal.

Test Setup In Detail

The test setup involved three 2GB/2-core/40GB Digital Ocean Ubuntu 14.04 instances, configured for high network traffic by increasing these limits:

Added fs.file-max=80000 to /etc/sysctl.conf

Added the following lines to /etc/security/limits.conf

* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000
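Taken together, the two changes above amount to something like the following (a sketch of what would be run as root on each box; the 80000 values mirror the limits listed above):

```shell
# Raise the system-wide open-file limit (mirrors the /etc/sysctl.conf change)
echo "fs.file-max=80000" >> /etc/sysctl.conf
sysctl -p

# Raise per-user process and file-descriptor limits
# (mirrors the /etc/security/limits.conf lines above)
cat >> /etc/security/limits.conf <<'EOF'
* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000
EOF

# The limits.conf changes take effect on the next login session;
# verify with:
ulimit -n
```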

Tyk gateway was installed on one server, Redis and MongoDB on another, and Nginx on the third. Each was left with a bare-bones setup, except that we made sure each service bound to a public interface so we could access it from outside.

Tyk was configured with the Redis and MongoDB details and had the Redis connection pool set to 2500, with the optimisations_use_async_session_write option enabled. Analytics were set to purge every 5 seconds. Tyk was acting as a simple proxy, so the API definition was keyless.
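For orientation, a tyk.conf for this setup would have looked roughly like the sketch below. Only optimisations_use_async_session_write is quoted from the description above; the other field names, hosts and values are illustrative and should be checked against the gateway configuration reference for your version:

```json
{
  "listen_port": 8080,
  "storage": {
    "type": "redis",
    "host": "10.0.0.2",
    "port": 6379,
    "pool_size": 2500
  },
  "enable_analytics": true,
  "analytics_config": {
    "purge_delay": 5
  },
  "optimisations_use_async_session_write": true
}
```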

Redis and Nginx were left as-is; however, Digital Ocean have their own configurations for these, so they are already optimised with recommended HA settings. If you are using vanilla repositories, you may need to configure both Redis and Nginx to handle more connections by default for a rush test.

Why we built Tyk

Two years ago we built Loadzen. It did OK, and users wanted more features; next thing you know, we were extending it all over the place.

Loadzen became a monolith. A year ago we refactored the application to be completely service-based and built a really shiny API for it; the frontend was AngularJS and the backend was Tornado-based. It was basically a single-page web app, and we wanted eventually to expose the API to a new CLI we had built.

The thing is, when it came to deciding how to integrate key and authorisation handling, it meant adding a mass of additional components to our application. Some of these were:

  • Managing keys in Redis
  • Machinery for revoking and re-validating keys
  • Keeping bearer tokens (for the webapp) and API tokens separate, and managing different expiry rates
  • Rate limiting and quotas (we didn’t want to be flooded)

The list goes on. It turned out that shoehorning all this functionality into our existing authentication and security infrastructure was hard, and writing rate-limiting code just seemed ridiculous for our needs. That’s when we came upon the idea of using an API gateway.

Now there are plenty out there, including paid ones, and we briefly considered using 3Scale. But what put us off all of them was that we simply wanted a component we could plug into our architecture with minimal fuss, one that would use our existing security mechanisms with a minimum of rewrites.

That’s how Tyk came about. We loved Golang, and it seemed like the perfect fit: it was fast, had an excellent standard library, and made it easy to write service-level components.

We dogfooded the Tyk core gateway on Loadzen for 3-4 months to make sure it wasn’t leaking memory or eating up resources (we never launched the API, but we had Tyk handle all web-app-based requests and bearer tokens), and it performed admirably. So we decided to extend it a bit.

That’s how we built Tyk, and why we built it. We made a conscious decision to make the core open source; nobody installs network-level components anymore if they can’t look at the source. We also wanted to make sure that the project grew and grew. This is our first work with Golang, and we think that with some community support and early adopters, Tyk can become a really useful tool in the systems engineer’s and developer’s arsenal.
