
7 Critical Factors For Selecting Your API Management Layer

Remember the final scene of Indiana Jones and the Last Crusade, where the old knight says: “You must choose, but choose wisely”? Admittedly, Indy wasn’t choosing an API management solution, and he certainly had other things on his plate that day. However, the adage still applies, and to help you choose wisely we’ve got James Higginbotham, who has put together a list of the seven critical factors to consider when choosing your API management layer.


I’m often asked which API management layer is the best one available today. The answer is always, “It depends”. Whether you are considering an open source or closed source API management layer, the number of vendors and options available today is astounding. Many API management solutions focus on delivering specific capabilities, while others strive to cover a breadth of features but don’t go very deep in all areas. This article will shed some light on how to approach the decision-making process for managing your API, so that you can ensure the needs of your business, product, and development teams are met.

Why Do You Need API Management?

For those unfamiliar, API management layers accelerate the deployment, monitoring, security, versioning, and sharing of APIs. They are often deployed as a reverse proxy, intercepting all incoming API request traffic and applying rules to determine if requests should be routed to the API (a minimal sketch of this model follows the list below). In addition to traffic management, they commonly offer:

  • Token-based authorization support through API-key based authentication and/or OAuth 2
  • Deployment and versioning support for redirecting incoming requests to the current or newly deployed release of an API
  • Rate limiting to reduce the impact of greedy API clients and denial of service (DoS) attacks
  • Developer portals for hosted documentation and self-onboarding by developers
  • Administrative portals for viewing usage reports
  • Billing and payment support for selling subscription-based access to your API
  • On-premise, cloud, and hybrid hosting deployment options
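To make the reverse-proxy model concrete, here is a minimal sketch in Go (the language Tyk itself is written in) of a gateway that applies one rule – an API key check – before routing traffic upstream. The port numbers and header name are illustrative assumptions, not any vendor’s defaults:

package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Hypothetical upstream API sitting behind the gateway.
    upstream, err := url.Parse("http://localhost:8081")
    if err != nil {
        panic(err)
    }
    proxy := httputil.NewSingleHostReverseProxy(upstream)

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Apply a rule before routing: here, a bare-bones API key check.
        if r.Header.Get("X-Api-Key") == "" {
            http.Error(w, "missing API key", http.StatusUnauthorized)
            return
        }
        proxy.ServeHTTP(w, r)
    })

    // The gateway listens on :8080 and forwards approved requests upstream.
    http.ListenAndServe(":8080", nil)
}

A real management layer hangs rate limiting, analytics, and versioning rules off that same interception point.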

API management layers may be offered as purely closed source, purely open source, or in a hybrid model using a combination of open source components and closed source offerings.

Factor #1: Self-hosted and SaaS deployment options

Your deployment requirements are a huge factor in API management layer selection. While most vendors offer managed cloud-based options, some choose to do so only during the early stages of your API, requiring you to move to an on-premise solution as your traffic increases. Knowing how you need to deploy your API management layer, including the resources available to monitor and maintain it, is important to the selection process. Look for a vendor that offers the kind of deployment you require: on-premise or managed cloud services. If you are unsure, select a vendor that offers a seamless transition from one to the other.

Factor #2: Simple installation process

If your API management layer will reside within your own cloud environment or data center rather than being hosted by the vendor, then installation needs to be simple. Evaluate the installation process to ensure that standing up new instances and new environments (e.g. staging, UAT, integration) will be easy – and preferably automated. If you prefer containerization, consider vendors that offer a container-based distribution to reduce the effort required to support your deployment process.

Factor #3: Meets feature requirements

Your selection process should include an evaluation. We covered this in a previous article, but I’ll repeat it here for reference. Your evaluation should include the following considerations:

  • Authorization – can you implement your desired authorization mechanism (e.g. API tokens, keys, OAuth 2, etc) to meet your needs?
  • Performance – how much overhead does the layer add to each request? Measure the performance of your API endpoints before and after installing the API management layer (see the sketch after this list). Expect some reduction in performance, but ensure that the management layer doesn’t cause such a drastic decrease that you need additional server capacity
  • Security – perform some basic penetration testing to verify that the layer is catching common attack vectors. Attacks such as SQL injection can often be simulated with simple scripts, as can denial of service attempts that rate limiting should mitigate
  • Onboarding – how easy or hard will it be for your developers to get onboarded? Does the onboarding process support the business, product, and technical needs of your company?
  • Reporting – does the management layer provide the information you will need on a day-to-day basis to better serve your developers? Can you export data via an API or push it into an external reporting solution easily, for integration into other daily/weekly reports?
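For the performance check above, a rough before-and-after comparison can be scripted in a few lines of Go. The two URLs are placeholders for your API hit directly and via the management layer; this is a blunt instrument, not a substitute for a proper load test:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// measure issues n sequential GET requests and returns the average latency.
func measure(url string, n int) (time.Duration, error) {
    var total time.Duration
    for i := 0; i < n; i++ {
        start := time.Now()
        resp, err := http.Get(url)
        if err != nil {
            return 0, err
        }
        resp.Body.Close()
        total += time.Since(start)
    }
    return total / time.Duration(n), nil
}

func main() {
    // Placeholder endpoints: the API directly, then via the management layer.
    for _, url := range []string{"http://localhost:8081/ping", "http://localhost:8080/ping"} {
        avg, err := measure(url, 100)
        if err != nil {
            fmt.Println(url, "error:", err)
            continue
        }
        fmt.Println(url, "average latency:", avg)
    }
}

The difference between the two averages is roughly the per-request overhead the management layer adds.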

Factor #4: Customization should not be required

I was recently discussing the abundance of infrastructure tools available to development teams today. With every tool comes the burden of understanding it and getting it integrated into your environment. Some tools choose to offer a variety of options, but require considerable effort to get them running. Be sure to evaluate the effort required to start using the API management layer. Customization options are great, but if you can’t get started easily or without installing lots of plugins, you need to know this ahead of time.

Factor #5: Easy upgrades

Whatever solution you select, you will need to keep it upgraded to ensure you have the latest improvements and available features. Evaluate the upgrade process by reading past release notes to better understand what will likely be required. An absence of release or upgrade notes should raise a concern. Just keep in mind that some commercial offerings only supply these details directly to customers or via a customer portal, so if you don’t find anything, contact the vendor to confirm that release notes are available to paying customers.

Factor #6: Vendor viability

We all want API management vendors to experience growth and success. However, not everyone will be around in the long term. Consider the vendor’s viability by understanding their revenue model. For open source solutions, take into consideration the companies backing the solution, along with the community that is supporting it. If there isn’t much activity, then the solution may become abandoned in the future.

Factor #7: Management Automation

Finally, consider the automation options available to configure, manage, and integrate the solution into your operations processes. Vendors that offer APIs for every configuration feature, along with reporting APIs and webhooks for important events, make it easy to automate changes and integrate the solution into your deployment process.
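As a sketch of what that looks like in practice, here is a hedged Go example that registers an API definition through a management API as part of a deployment script. The endpoint, payload shape, and token are hypothetical – every vendor’s actual automation API differs:

package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    // Hypothetical API definition payload and admin endpoint.
    body := []byte(`{"name":"widgets-api","listen_path":"/widgets/","target_url":"http://internal:8081"}`)

    req, err := http.NewRequest(http.MethodPost, "http://gateway-admin:9000/apis", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer ADMIN_TOKEN") // placeholder credential

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("create API definition:", resp.Status)
}

If every configuration change can be made this way, the gateway can be driven entirely from your CI/CD pipeline rather than clicked together in an admin UI.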


As you have likely realized, it isn’t easy to select an API management layer. However, your decision will have ramifications for months or years to come. It may offer tremendous flexibility or severely limit your options in the future. Take the time to properly evaluate the API management layer that best fits your needs.

What could be better? What needs to be added? Tell us, we can take it!

At Tyk, we are committed to helping people connect systems. We have some pretty great ideas on how that should happen. Tens of thousands of Tyk users agree. Maybe even you?

When it comes to our roadmap, we listen to the thousands of open source users, Pro license holders and cloud subscribers. We listen, build and release. Always getting better. Fast.

We believe we have the most transparent roadmap in the industry: it’s a Trello board, and it sits here.

We would love you to contribute to this – let us know what your priorities are. You can feed into it by contacting us through the forum, GitHub, Gitter or email.

So this post is aimed at you. We know you have an opinion; we know you think there is a better way of doing things. That’s why you use Tyk and not “Boring McBoringface’s Monolith API Stack”. So whether you have the Community, Pro, Enterprise or Cloud edition – tell us what you want from Tyk.

Delivering performance with version 1.7.1

We get compared to other API gateways a lot, and we haven’t published any benchmarks recently, so we thought we’d do a major optimisation drive and make sure that Tyk was really competitive and really performant. The result is version 1.7.1 of the gateway, and it flies.

As usual, you can get v1.7.1 from our GitHub releases page. It’s fully compatible with the latest dashboard (0.9.5), so you can literally do a drop-in replacement of the binary to get the improvements. Nothing else is needed.

We say there are performance improvements, so what are they? Let’s look back: in the last round of benchmarks we did, we found that Tyk could handle about 400 requests per second before getting sweaty. Those tests were done using various tools (Gatling and LoadRunner on various hardware, including a local VM!).

With version 1.7 we found that you could push that a bit higher, to around 600 rps, thanks to some smaller optimisations where we offloaded non-returning write operations off the main thread into goroutines.
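To illustrate the pattern – this is a simplified sketch, not Tyk’s actual code – a write whose result the request doesn’t depend on can be handed to a goroutine, so the main request path never blocks on it:

package main

import (
    "log"
    "time"
)

type Record struct{ Path string }

type Store struct{}

// Write simulates a slow I/O operation, such as persisting analytics.
func (s *Store) Write(r Record) error {
    time.Sleep(10 * time.Millisecond)
    return nil
}

func main() {
    store := &Store{}
    rec := Record{Path: "/api/widgets"}

    // Fire-and-forget: the caller continues immediately while the
    // write completes on its own goroutine.
    go func(r Record) {
        if err := store.Write(r); err != nil {
            log.Printf("async write failed: %v", err)
        }
    }(rec)

    // In a real server the process stays alive; this demo just waits.
    time.Sleep(20 * time.Millisecond)
}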

Now with 1.7.1 we’ve made Tyk work well at over 1000 rps on cheaper hardware.


In our earlier benchmarks we used 4-core machines since Tyk is CPU bound, so the more cores, the better. But 4 cores are pricey and you don’t want to be running a fleet of them. So the benchmarks presented below were set up on a much smaller 2-core machine. You can find the full test setup detail at the bottom of this blog post.

In brief, the test setup consisted of three Digital Ocean servers in their $0.03 p/h price bracket: 2GB, 2-core Ubuntu boxes. These suckers are cheaper than the dual-core offerings at AWS, which is why we picked them. (AWS t2.medium instances are actually pretty competitive, but their burstable CPU capacity causes locks when it kicks in, and at $0.052 per hour they are still about 2 cents more expensive, so they aren’t great for high-availability tests.)

The test was performed in “rush” mode, ramping from 0 to 2,000 concurrent connections. That totalled about 1,890 requests per second at the end of the test.


Tyk 1.7.1 performance benchmarks

The bulk of the latency – around 20ms of it – is down to the load generators operating out of Virginia (AWS) and the Digital Ocean servers living in New York. When we ran comparison benchmarks using AWS instances, we found the latency was around 6ms, so the overhead is network latency, not software generated.

As can be seen from the results, Tyk 1.7.1 performed well, with an average latency of 28ms overall, serving up 115,077 requests in 120 seconds. That works out to an average of 959 requests per second (115,077 ÷ 120 ≈ 959), which translates to about 82,855,440 hits a day.

This is on a single node that costs 3 cents per hour. We’re pretty chuffed with that. We can see there are a few errors and timeouts, but at a negligible level compared to the traffic volume hitting the machine.

This release marks a major performance and stability boost to the Tyk gateway which we’re incredibly proud to share with our users.

As always, get in touch with us in the comments or in our community portal.

Test Setup In Detail

The test setup involved three 2GB/2-core/40GB Digital Ocean Ubuntu 14.04 instances, configured for high network traffic by increasing these limits:

Added fs.file-max=80000 to /etc/sysctl.conf

Added the following lines to /etc/security/limits.conf:

* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000

Tyk gateway was installed on one server, Redis and MongoDB on another, and Nginx on the third. Each was left with a bare-bones setup, except for ensuring that it bound to a public interface so we could access it from outside.

Tyk was configured with the Redis and MongoDB details and had the Redis connection pool set to 2500, with the optimisations_use_async_session_write option enabled. Analytics were set to purge every 5 seconds. Tyk was simply acting as a proxy, so the API definition was keyless.

Redis and Nginx were left as is; however, Digital Ocean have their own configurations for these, so they are already optimised with recommended HA settings. If you are using vanilla repositories, you may need to configure both Redis and Nginx to handle more connections by default for a rush test.

Why we built Tyk

Two years ago we built Loadzen. It did OK, users wanted more features, and the next thing you know we were extending it all over the place.

Loadzen became a monolith. A year ago we refactored the application to be completely service-based and built a really shiny API for it, with an AngularJS frontend and a Tornado-based backend. It was basically a single-page webapp, and we wanted to eventually expose the API to a new CLI we had built.

The thing is, integrating key and authorisation handling meant adding massive additional components to our application, including:

  • Managing keys in Redis
  • Machinery for revoking and re-validating keys
  • Keeping bearer tokens (for the webapp) and API tokens separate, and managing different expiry rates
  • Rate limiting and quotas – we didn’t want to be flooded

The list goes on. It turned out that shoehorning all this functionality into our existing authentication and security infrastructure was hard, and writing rate-limiting code just seemed ridiculous for our needs. That’s when we came upon the idea of using an API gateway.
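For a sense of the kind of code we didn’t want to write and maintain ourselves, here is a minimal token-bucket rate limiter sketch in Go – hypothetical, and a long way short of what a production gateway needs (per-key buckets, distributed state, quotas):

package main

import (
    "fmt"
    "sync"
    "time"
)

// Bucket is a token bucket: it holds up to capacity tokens and
// refills at rate tokens per second.
type Bucket struct {
    mu       sync.Mutex
    tokens   float64
    capacity float64
    rate     float64
    last     time.Time
}

func NewBucket(capacity, rate float64) *Bucket {
    return &Bucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// Allow reports whether one more request may proceed right now.
func (b *Bucket) Allow() bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    now := time.Now()
    b.tokens += now.Sub(b.last).Seconds() * b.rate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.last = now
    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}

func main() {
    limiter := NewBucket(5, 1) // burst of 5, refilling 1 token per second
    for i := 0; i < 7; i++ {
        fmt.Println("request", i, "allowed:", limiter.Allow())
    }
}

Multiply that by key management, token expiry, and quota accounting, and an off-the-shelf gateway starts to look very attractive.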

Now there are plenty out there, including paid ones – we briefly considered using 3Scale – but what put us off all of them was that we simply wanted a component we could plug into our architecture with minimal fuss, one that would use our existing security mechanisms with a minimum of rewrites.

That’s how Tyk came about. We loved Golang, and it seemed like the perfect fit – it was fast, had an excellent standard library, and made it easy to write service-level components.

We dogfooded the Tyk core gateway on Loadzen for 3-4 months to make sure it wasn’t leaking memory or eating up resources (we never launched the API, but we had Tyk handle all webapp-based requests and bearer tokens), and it performed admirably. So we decided to extend it a bit.

That’s how we built Tyk, and why we built it. We made a conscious decision to make the core open source – nobody installs network-level components anymore if they can’t look at the source. We also wanted to make sure that the project grew and grew. This is our first work with Golang, and we think that, with some community support and early adopters, Tyk can become a really useful tool in the systems engineer’s and developer’s arsenal.
