
How Secure Is Your API?


You have researched the latest API design techniques. You have found the best framework to help you build it. You have all the latest tools in testing and debugging at your fingertips. Perhaps you even have an amazing developer portal setup. But, is your API protected against the common attack vectors?

Recent security breaches have involved APIs, giving anyone building APIs to power their mobile apps, partner integrations, and SaaS products pause. By applying proper security practices and multiple layers of security, your API can be better protected.

Recent API Security Concerns

There have been several API security breaches that demonstrate some of the key vulnerabilities that can occur when exposing APIs.

These and other recent cases are causing API providers to pause and reassess their API security approach.

Essential API Security Features

Let’s first examine the essential security practices to protect your API:

Rate Limiting: Restricts API requests to set thresholds, typically based on IP address, API token, or more granular factors; prevents traffic spikes from negatively impacting API performance across consumers. Also prevents denial-of-service attacks, whether malicious or unintentional due to developer error. (A minimal sketch of per-client rate limiting follows this list.)

Protocol: Parameter filtering to block credentials and PII from being leaked; blocking unsupported HTTP verbs on endpoints.

Session: Proper cross-origin resource sharing (CORS) to allow or deny API access based on the originating client; prevents cross-site request forgery (CSRF), often used to hijack authorized sessions.

Cryptography: Encryption in motion and at rest to prevent unauthorized access to data.

Messaging: Input validation to prevent submitting invalid data or protected fields; parser attack prevention, such as XML entity parser exploits; prevention of SQL and JavaScript injection attacks sent via requests to gain access to unauthorized data.
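
To make the rate-limiting item above concrete, here is a minimal sketch of a per-client limiter written as Go HTTP middleware. It is illustrative only: the golang.org/x/time/rate package, the 5 requests-per-second rate, the burst of 10, and the use of the remote IP as the client key are assumptions for this example, not how any particular gateway implements it.

```go
package main

import (
	"net"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// clientLimiters hands out one token-bucket limiter per client IP.
type clientLimiters struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func newClientLimiters() *clientLimiters {
	return &clientLimiters{limiters: make(map[string]*rate.Limiter)}
}

func (c *clientLimiters) get(ip string) *rate.Limiter {
	c.mu.Lock()
	defer c.mu.Unlock()
	l, ok := c.limiters[ip]
	if !ok {
		// 5 req/s with a burst of 10: illustrative values only.
		l = rate.NewLimiter(rate.Limit(5), 10)
		c.limiters[ip] = l
	}
	return l
}

// rateLimit wraps any handler and rejects callers that exceed their bucket.
func rateLimit(limits *clientLimiters, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !limits.get(ip).Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`))
	})
	http.ListenAndServe(":8080", rateLimit(newClientLimiters(), api))
}
```

In a production setup the limiter map would also need eviction of stale clients and limits keyed by API token or subscription tier rather than raw IP.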

Taking a Layered Approach to Security

As an API provider, you may look at the list above and wonder how much additional code you’ll need to write to secure your APIs. Fortunately, there are solutions that can protect your API against these various attack vectors – with little-to-no change to your code in most circumstances:

API Gateway: Externalizes internal services; transforms protocols, typically into web APIs using JSON and/or XML. May offer basic security options through token-based authentication and minimal rate limiting options. Typically does not address customer-specific, external API concerns necessary to support subscription levels and more advanced rate limiting. (A toy sketch of the gateway pattern follows this list.)

API Management: API lifecycle management, including publishing, monitoring, protecting, analyzing, monetizing, and community engagement. Some API management solutions also include an API gateway.

Web Application Firewall (WAF): Protects applications and APIs from network threats, including Denial-of-Service (DoS) attacks and common scripting/injection attacks. Some API management layers include WAF capabilities, but may still require a WAF to be installed to protect from specific attack vectors.

Anti-Farming/Bot Security: Protects data from being aggressively scraped by detecting request patterns from one or more IP addresses.

Content Delivery Network (CDN): Distributes cached content to the edge of the Internet, reducing load on origin servers while protecting them from Distributed Denial-of-Service (DDoS) attacks. Some CDN vendors will also act as a proxy for dynamic content, reducing the TLS overhead and unwanted layer 3 and layer 4 traffic on APIs and web applications.

Identity Providers (IdP): Manage identity, authentication, and authorization services, often through integration with API gateway and management layers.

Review/Scanning: Scans existing APIs to identify vulnerabilities before release.
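
As a rough illustration of what the gateway layer does, the sketch below proxies requests to an upstream service and rejects anything without a known bearer token. It is a toy, not a real gateway: the upstream URL, the hard-coded token set, and the Authorization header format are assumptions made for the example.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Placeholder address for an internal service the gateway externalizes.
	upstream, err := url.Parse("http://localhost:9000")
	if err != nil {
		panic(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// A toy token store; a real gateway would consult a datastore or IdP.
	validTokens := map[string]bool{"example-token": true}

	gateway := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if !validTokens[token] {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	http.ListenAndServe(":8080", gateway)
}
```

Everything else in the list above (rate limiting tiers, WAF rules, bot detection, CDN caching) layers on top of this basic externalize-and-authenticate pattern.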

When applied in a layered approach, you can protect your API more effectively:

[Diagram: layered API security approach]

How Tyk Helps Secure Your API

Tyk is an API management layer that offers a secure API gateway for your API and microservices. Tyk implements security features such as:

  • Quotas and Rate Limiting to protect your APIs from abuse
  • Authentication using access tokens, HMAC request signing, JSON Web Tokens, OpenID Connect, basic auth, LDAP, Social OAuth (e.g. GPlus, Twitter, GitHub) and legacy Basic Authentication providers (a generic sketch of HMAC request signing follows this list)
  • Policies and tiers to enforce tiered, metered access using powerful key policies
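
To show what HMAC request signing involves in general terms, here is a small Go sketch that signs a request’s Date header with a shared secret and attaches the result. The header layout, the SHA-256 choice, and the key/secret values are assumptions made for illustration; this is not a drop-in for Tyk’s HMAC scheme, which follows its own documented format.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/http"
	"time"
)

// signRequest computes an HMAC-SHA256 over the Date header and attaches it.
// The header names and encoding here are illustrative only.
func signRequest(req *http.Request, keyID, secret string) {
	date := time.Now().UTC().Format(http.TimeFormat)
	req.Header.Set("Date", date)

	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte("date: " + date))
	sig := base64.StdEncoding.EncodeToString(mac.Sum(nil))

	req.Header.Set("Authorization",
		fmt.Sprintf(`Signature keyId="%s",algorithm="hmac-sha256",signature="%s"`, keyID, sig))
}

func main() {
	req, _ := http.NewRequest("GET", "http://localhost:8080/resource", nil)
	signRequest(req, "example-key", "example-secret")
	fmt.Println(req.Header.Get("Authorization"))
}
```

The gateway recomputes the same HMAC with its copy of the secret and rejects the request if the signatures do not match, which also defeats tampering with the signed headers in transit.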

Carl Reid, Infrastructure Architect at Zen Internet, found that Tyk was a good fit for their security needs:

“Tyk complements our OpenID Connect authentication platform, allowing us to set API access / rate limiting policies at an application or user level, and to flow through access tokens to our internal APIs.”

When asked why they chose Tyk instead of rolling their own API management and security layer, Carl mentioned that it helped them to focus on delivering value quickly:

“Zen have a heritage of purpose building these types of capabilities in house. However after considering whether this was the correct choice for API management and after discovering the capabilities of Tyk we decided ultimately against it. By adopting Tyk we enable our talent to focus their efforts on areas which add the most value and drive innovation which enhances Zen’s competitive advantage”

Find out more about how Tyk can help secure your API here.

What could be better? What needs to be added? Tell us, we can take it!

At Tyk, we are committed to helping people connect systems. We have some pretty great ideas on how that should happen. Tens of thousands of Tyk users agree. Maybe even you?

When it comes to our roadmap, we listen to the thousands of open source users, Pro license holders and cloud subscribers. We listen, build and release. Always getting better. Fast.

We believe we have the most transparent roadmap in the industry: it’s a Trello board, and it sits here.

We would love you to contribute to this, so let us know what your priorities are. You can feed into it by contacting us through the forum, GitHub, Gitter or email.

So this post is aimed at you. We know you have an opinion, and we know you think there is a better way of doing things. That’s why you use Tyk and not “Boring McBoringface’s Monolith API Stack”. So whether you have the Community, Pro, Enterprise or Cloud edition – tell us what you want from Tyk.

Delivering performance with version 1.7.1

We get compared to other API gateways a lot, and we haven’t published any benchmarks recently, so we thought we’d do a major optimisation drive to make sure that Tyk is really competitive and really performant. We’ve released version 1.7.1 of the gateway, and it flies.

As usual, you can get v1.7.1 from our GitHub releases page. It’s fully compatible with the latest dashboard (0.9.5), so you can literally do a drop-in replacement of the binary to get the improvements. Nothing else is needed.

We say there are performance improvements, so what are they? Let’s look back: in the last round of benchmarks we did, we found that Tyk could handle about 400 requests per second before getting sweaty. Those tests were done using various tools (Gatling and LoadRunner) on various hardware, including a local VM.

With version 1.7 we pushed that a bit higher, to around 600 rps, with some smaller optimisations where we moved non-returning write operations off the main thread and into goroutines.
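
The general pattern behind that optimisation, moving writes the request doesn’t need to wait for off the hot path, can be sketched roughly like this in Go. The buffered channel size, the record type, and the flushRecord function are placeholders; this shows the technique, not Tyk’s actual implementation.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// analyticsRecord is a placeholder for whatever needs to be persisted.
type analyticsRecord struct {
	Path      string
	Status    int
	Timestamp time.Time
}

// records is a buffered channel so request handlers never block on a write.
var records = make(chan analyticsRecord, 1000)

// startWriter drains the channel in a background goroutine.
func startWriter() {
	go func() {
		for rec := range records {
			flushRecord(rec) // e.g. write to Redis or another datastore
		}
	}()
}

// flushRecord stands in for the real, slower persistence call.
func flushRecord(rec analyticsRecord) {
	fmt.Printf("flushed %s %d at %s\n", rec.Path, rec.Status, rec.Timestamp.Format(time.RFC3339))
}

func handler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("ok"))
	// Non-blocking send: if the buffer is full, drop the record rather than slow the request.
	select {
	case records <- analyticsRecord{Path: r.URL.Path, Status: http.StatusOK, Timestamp: time.Now()}:
	default:
	}
}

func main() {
	startWriter()
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```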

Now with 1.7.1 we’ve made Tyk work well at over 1000 rps on cheaper hardware.

Setup

In our earlier benchmarks we used 4-core machines, since Tyk is CPU-bound: the more cores, the better. But 4 cores are pricey and you don’t want to be running a fleet of them, so the benchmarks presented below were run on a much smaller 2-core machine. You can find the full test setup details at the bottom of this blog post.

In brief, the test setup consisted of three Digital Ocean servers in their $0.03 p/h price bracket: 2GB, 2-core Ubuntu boxes. These are cheaper than the dual-core offerings at AWS, which is why we picked them (AWS t2.medium instances are actually pretty competitive, but their burstable CPU capacity causes locks when it kicks in, and at $0.052 per hour they are still around 2 cents more expensive, so they aren’t great for high-availability tests).

The test was performed using blitz.io in “rush” mode, ramping from 0 to 2000 concurrent connections. That peaked at about 1,890 requests per second at the end of the test.

Results

[Figure: Tyk 1.7.1 performance benchmarks]

The average latency of 20ms is down to the load generators operating out of Virginia (AWS) and the Digital Ocean servers living in New York. When we ran comparison benchmarks using AWS instances, we found the latency was around 6ms, so the overhead is network latency, not software generated.

As can be seen from the results, Tyk 1.7.1 performed well, with an average latency of 28ms overall, serving up 115,077 requests in 120 seconds at an average of 929 requests per second. That translates to about 82,855,440 hits/day.

This is on a single node that costs 3 cents per hour. We’re pretty chuffed with that. We can see there are a few errors and timeouts, but at a negligible level compared to the traffic volume hitting the machine.

This release marks a major performance and stability boost to the Tyk gateway which we’re incredibly proud to share with our users.

As always, get in touch with us in the comments or in our community portal.

Test Setup In Detail

The test setup involved three 2GB/2-core, 40GB Digital Ocean Ubuntu 14.04 instances, configured for high network traffic by increasing these limits:

Add fs.file-max=80000 to /etc/sysctl.conf

Add the following lines to /etc/security/limits.conf:

* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000

The Tyk gateway was installed on one server, Redis and MongoDB on another, and Nginx on the third. Each was left with a bare-bones setup, except that the services were bound to a public interface so we could access them from outside.

Tyk was configured with the Redis and MongoDB details and had the Redis connection pool set to 2500, with the optimisations_use_async_session_write option enabled. Analytics were set to purge every 5 seconds. Tyk was simply acting as a proxy, so the API definition was keyless.

Redis and Nginx were left as is; however, Digital Ocean have their own configurations for these, so they are already optimised with recommended HA settings. If you are using vanilla repositories, then you may need to configure both Redis and Nginx to handle more connections by default for a rush test.

Why we built Tyk

Two years ago we built Loadzen. It did OK, users wanted more features, and the next thing you know we were extending it all over the place.

Loadzen became a monolith. A year ago we refactored the application to be completely service based and built a really shiny API for it; the frontend was AngularJS and the backend was Tornado based. It was basically a single-page webapp, and we wanted to eventually expose the API to a new CLI we had built.

The thing is, when it came to deciding how to integrate key and authorisation handling, it meant adding a mass of additional components to our application. Some of these were:

  • Managing keys in Redis
  • Machinery for revoking and re-validating keys
  • Keeping bearer tokens (for the webapp) and API tokens separate, and managing different expiry rates
  • Rate limiting and quotas, because we didn’t want to be flooded

The list goes on. It turned out that shoehorning all this functionality into our existing authentication and security infrastructure was hard, and writing rate-limiting code just seemed ridiculous for our needs. That’s when we came upon the idea of using an API gateway.

Now there are plenty out there, including paid ones – and we briefly considered using 3Scale – but what put us off all of these was that we simply wanted a component we could plug into our architecture with minimal fuss, one that would use our existing security mechanisms with a minimum of rewrites.

That’s how Tyk came about. We loved Golang, and it seemed like the perfect fit – it was fast, had an excellent standard library, and made it easy to write service-level components.

We dogfooded the Tyk core gateway on Loadzen for 3-4 months to make sure it wasn’t leaking memory or eating up resources (we never launched the API, but we had Tyk handle all web-app-based requests and bearer tokens), and it performed admirably. So we decided to extend it a bit.

That’s how we built Tyk, and why we built it. We made a conscious decision to make the core open source: nobody installs network-level components anymore if they can’t look at the source. We also wanted to make sure that the project grew and grew. This is our first work with Golang, and we think that, with some community support and early adopters, Tyk can become a really useful tool in the system administrator’s and developer’s arsenal.
