Performance Benchmarks

As a critical component of your architecture, the API gateway must deliver high throughput and low latency for your API traffic, regardless of the shape or volume of that traffic.

Performance Matters

All Tyk components are written in Go (also known as Golang), the language of Kubernetes and Docker. Why Go? As a compiled language, Go is typically 40 times faster than Python, and considerably faster than Lua, the language of choice for other popular API gateway vendors such as Nginx, Apache APISIX, and Kong.

Why does that matter? In the interest of efficiency and cost savings, you want the most performant API gateway for your business. Higher performance means a better user experience as well as a lower cost burden on your infrastructure. In the following tests we use Requests Per Second (RPS) and the latency of the 99th percentile of requests (P99) as our standard measures of performance.
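To make these two measures concrete, here is a minimal Go sketch (illustrative only, not part of the benchmarking tooling) showing how RPS and P99 can be derived from a set of recorded request latencies; the sample latencies and the five-minute window are hypothetical.

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// p99 returns the 99th-percentile latency using the nearest-rank method.
func p99(latencies []time.Duration) time.Duration {
	sorted := append([]time.Duration(nil), latencies...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(math.Ceil(0.99*float64(len(sorted)))) - 1
	return sorted[idx]
}

func main() {
	// Hypothetical latencies collected during one 5-minute test run.
	latencies := []time.Duration{
		8 * time.Millisecond, 9 * time.Millisecond, 11 * time.Millisecond,
		12 * time.Millisecond, 95 * time.Millisecond,
	}
	window := 5 * time.Minute
	// RPS is simply the number of completed requests divided by the window.
	rps := float64(len(latencies)) / window.Seconds()
	fmt.Printf("RPS: %.2f, P99: %s\n", rps, p99(latencies))
}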

The tests in this article were conducted on three different clouds, AWS, GCP, and Azure, with four classes of machines on each cloud. Each class of machine is similar in CPU resources; however, RAM and networking capabilities differ, which causes the slight, expected variance between clouds. Each set of tests was run three times, for a duration of five minutes per run. For a detailed breakdown of the exact testing environments and the machine specifications, please see this repository.

Tyk Middleware Analysis

In this section, we test Tyk with various middleware functions enabled. Here is a breakdown of all the tests conducted:

  • Analytics: Analytics recording enabled.
  • Auth: Encrypted API key authentication enabled.
  • Auth & Quota: Encrypted API key authentication enabled as well as quota management.
  • Rate-limiting: API-level rate-limiting enabled.
  • All: All the above middleware functions enabled.
  • Vanilla: Tyk is configured as a transparent reverse proxy, with no middleware.
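For context on the Vanilla baseline above, a transparent reverse proxy simply forwards traffic to an upstream without applying any gateway logic. The following Go sketch is a conceptual illustration only, not Tyk source code, and the upstream address and listen port are placeholders.

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstream service that the proxy fronts.
	upstream, err := url.Parse("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	// A transparent reverse proxy: requests are forwarded unchanged, with no
	// authentication, rate limiting, quota or analytics middleware applied.
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	log.Fatal(http.ListenAndServe(":8000", proxy))
}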

There are multiple factors that affect performance, such as payload sizes, connection type / length, the hardware itself, SSL considerations, which Tyk features are enabled, and more. Considerable care was taken to minimise the confounding variables and in the interest of transparency we have open sourced our entire testing methodology.

With that said, Tyk is clearly capable of handling substantial amounts of traffic and, as one would expect, scales efficiently with hardware.

Automated Performance Testing Using Ansible

The metrics in this article were generated using our open source performance testing repository.

You can become familiar with the repository and use it yourself by watching the Introduction to Tyk Ansible Performance Testing by Tyk’s Zaid Albirawi.

You may also reproduce all the results on your own infrastructure. The following folder contains the shell scripts to generate all the metrics used in this article.

Tyk vs Kong

In the following tests, Tyk and Kong are benchmarked against one another with different plugins enabled, comparing Kong’s equivalent plugins with Tyk’s native authentication, rate-limiting and quota functionality.

Summary of results:

In vanilla testing, Tyk and Kong achieve very similar results. This is expected, as both gateways are acting as transparent reverse proxies with minimal computational overhead. In practice, however, one would expect to leverage an API gateway for features such as rate limiting and authentication. In such real-world use cases, Tyk outperforms Kong on all three major cloud providers in both the RPS and P99 metrics. Note that as hardware allocation increases, Tyk’s performance scales almost linearly, indicating an extremely efficient use of resources. The same cannot be said for Kong, which does not appear to scale as effectively past 8 cores. This behaviour is consistent across all three major cloud providers.

Tyk vs Apollo

In the following tests, we aim to model and compare the performance of Tyk’s Universal Data Graph versus Apollo’s RESTDataSource at different graph query depths.

Note: As Tyk is built in Go with native multithreading, we ran Apollo in multithreaded mode with the npm cluster library.
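As background for this note: a Go HTTP server schedules each incoming request on its own goroutine and uses every available CPU core by default, which is why no equivalent clustering step is needed on the Tyk side. A minimal illustration follows (a hypothetical handler, not Tyk code).

package main

import (
	"fmt"
	"net/http"
	"runtime"
)

func main() {
	// Go programs use all available CPU cores by default (GOMAXPROCS).
	fmt.Println("cores available:", runtime.GOMAXPROCS(0))
	// net/http serves every incoming request on its own goroutine, so the
	// server is concurrent across cores without any extra clustering setup.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a goroutine")
	})
	fmt.Println(http.ListenAndServe(":8000", nil))
}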

Query Depth 1

1 REST request

  • 1 user
query ($id: ID!) {
  user(id: $id) {
    username
    name
    email
  }
}

Query Depth 2

2 REST requests

  • 1 user
  • 10 posts
query ($id: ID!) {
  user(id: $id) {
    username
    name
    email
    posts {
      title
      body
    }
  }
}

Query Depth 3

12 REST requests (one for the user, one for the user’s posts, and one comments request per post)

  • 1 user
  • 10 posts
  • 100 comments
query ($id: ID!) {
  user(id: $id) {
    username
    name
    email
    posts {
      title
      body
      comments {
        name
        email
        body
      }
    }
  }
}
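To show where the request counts above come from, the following Go sketch (a simplified illustration, not Universal Data Graph or Apollo code; the endpoint paths and post IDs are hypothetical) fans out the way a naive REST-backed resolver would: one call for the user, one for the user’s posts, and one per post for its comments, giving 1 + 1 + 10 = 12 upstream requests at depth 3.

package main

import (
	"fmt"
	"net/http"
)

// fetch issues one upstream REST request; the base URL and paths are hypothetical.
func fetch(path string) {
	resp, err := http.Get("http://localhost:8080" + path)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
}

func main() {
	requests := 0

	// Depth 1: one request for the user itself.
	fetch("/users/1")
	requests++

	// Depth 2: one request for the user's posts (assume it returns 10 posts).
	fetch("/users/1/posts")
	requests++
	postIDs := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

	// Depth 3: one comments request per post.
	for _, id := range postIDs {
		fetch(fmt.Sprintf("/posts/%d/comments", id))
		requests++
	}

	fmt.Println("total upstream REST requests:", requests) // 12
}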

Summary of results:

Tyk and Apollo achieve similar API performance on a machine with 2 cores. However, as resource allocation increases, the performance delta between the two widens dramatically, with Tyk outperforming Apollo across all major cloud providers. This illustrates Tyk’s ability to scale efficiently and effectively with hardware.