
Meet our Singapore team at CommunicAsia – May 23-25

23 - 25 May 2017 (3 Day Show), Tuesday to Thursday, Basement 2 to Level 5, Suntec Singapore

You Are Invited
CommunicAsia2017 EnterpriseIT2017
Stand No. BF2-10
Tyk Technologies Ltd would like to invite you to Stand No. BF2-10 at CommunicAsia2017 and EnterpriseIT2017 – Asia’s premier integrated info-communications technology event. Occupying Basement 2 to Level 5 of Marina Bay Sands, Singapore, both shows will feature an exciting line-up of the latest innovations, industry heavyweights and leading experts.
So drop by our stand to say hi and check out our latest products! To schedule an appointment or to get in touch with our team who will be at the show, contact Zane Lim at +65 6813 2083 / [email protected].
Tyk is a leading open source API management platform comprising an API Gateway, Dashboard, Portal and Analytics. Tyk makes it simple and cost-effective to manage an organisation's APIs with its distinctive features.
Download CommunicAsia's Visiting Brochure | Download EnterpriseIT's Visiting Brochure

When is Hypermedia and HATEOAS a Right Fit For Your API?

Previously, we discussed the value of offering events from your API that can extend the conversation you have with your consumers. Another consideration for extending your API conversation is to add hypermedia and HATEOAS support. Let's explore these concepts and the value they bring to our APIs.

What is a Hypermedia API?

A hypermedia API is one driven by self-descriptive links that point to other, related API endpoints. Often, these links point to other resources that are related, e.g. the owner of a project, or to relevant endpoints based on the context of the consumer.

For example, the Github API offers hypermedia links whenever you request the details on a specific user:

```json
{
  "login": "launchany",
  "id": 17768866,
  "avatar_url": "",
  "gravatar_id": "",
  "url": "",
  "html_url": "",
  "followers_url": "",
  "following_url": "{/other_user}",
  "gists_url": "{/gist_id}"
}
```


If an API supplies hypermedia links that are context-sensitive and change based on where the user is within the API and what features and functionality are currently available to the consumer, the API is said to apply the HATEOAS constraint. HATEOAS (“Hypermedia As The Engine Of Application State”) is a constraint within REST that originated in Fielding’s dissertation.
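To make the HATEOAS constraint concrete, here is a minimal sketch in Python. The `/orders` endpoints and state names are hypothetical, not from any real API; the point is that the links offered change with the resource's state:

```python
# Hypothetical order resource: the hypermedia links returned depend on
# the order's current state, so consumers discover what is possible at
# runtime instead of hard-coding transitions.
def order_links(order_id, state):
    links = [{"rel": "self", "href": f"/orders/{order_id}"}]
    if state == "pending":
        # A pending order can still be paid for or cancelled.
        links.append({"rel": "payment", "href": f"/orders/{order_id}/payment"})
        links.append({"rel": "cancel", "href": f"/orders/{order_id}/cancel"})
    elif state == "paid":
        # Once paid, only a refund transition is offered.
        links.append({"rel": "refund", "href": f"/orders/{order_id}/refund"})
    return links

print([link["rel"] for link in order_links(42, "pending")])  # ['self', 'payment', 'cancel']
print([link["rel"] for link in order_links(42, "paid")])     # ['self', 'refund']
```

A consumer that simply follows the advertised link relations never needs to know ahead of time which transitions exist.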

How Does Hypermedia Extend the API Conversation?

1. Hypermedia Informs Consumers About What Is Possible

Hypermedia helps connect the dots of your API, making it more like the web. Imagine using your favourite search engine to find some results, only to never click on any of the results. Unfortunately, that is the way we design most of our APIs – without offering the consumer the opportunity to explore the depth of the data and capabilities offered by your API.

APIs with hypermedia extend the conversation of your API by offering runtime discovery of capabilities. They help API consumers realise what is possible – and what isn’t – when using your API.

2. Hypermedia Enables API Evolvability

Not all APIs stay the same forever. APIs are like any other product – they have a product lifecycle that includes growing in maturity over time. Hypermedia links allow the API to evolve over time, exposing new capabilities as they emerge without disrupting existing API consumers.

Note: I didn't mention anything about hypermedia protecting clients from changing URLs. That's because sparing clients from having to compute URLs is a side effect, not the primary reason to select hypermedia.

3. Hypermedia Encourages Loose Coupling

By including hypermedia in resource representations, API designers are free to stop trying to include everything in every API response. API endpoints can focus on doing one thing properly, using hypermedia links to reference other details that may be fetched by API consumers if and when needed. This creates a loosely coupled API by separating capabilities into distinct, yet related, endpoints rather than forcing every piece of data to be included within each response.
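As an illustration (the resource shapes, field names and URLs here are hypothetical), compare embedding everything in one response with linking to related resources:

```python
# Tightly coupled: the full owner record rides along in every project
# response, whether or not the consumer needs it.
embedded_style = {
    "id": 7,
    "name": "Demo Project",
    "owner": {"id": 3, "name": "Alice", "email": "alice@example.com"},
}

# Loosely coupled: the project links to its owner; consumers fetch
# /users/3 only if and when they need those details.
linked_style = {
    "id": 7,
    "name": "Demo Project",
    "_links": {
        "self": {"href": "/projects/7"},
        "owner": {"href": "/users/3"},
    },
}

print(linked_style["_links"]["owner"]["href"])  # /users/3
```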

Is Hypermedia a Fit For Your API?

Hypermedia is like “Choose Your Own Adventure” for API Consumers, allowing consumers to discover capabilities and drive execution. For all but the most basic of APIs, hypermedia will expand your API into a more dynamic conversation with your API consumers. It also enables your API to evolve and mature over time, by adding new capabilities as needed by your consumers.

5 Mistakes Your API Product Must Avoid

Many of my consulting engagements involve helping teams improve their API products. Over the last decade, I have seen some strategies work better than others. While not every product and circumstance is the same, there are some common mistakes that prevent teams from delivering a great API product. I want to share with you the top five mistakes teams make when it comes to their API product strategy and how you can avoid them.

Mistake #1: Solving the Wrong Problem

Many teams focus on the wrong problem, resulting in delivering an API product that fails to resonate with the target audience. To avoid this mistake, it helps to map out the API and how it fits into the common usage scenarios. This mapping exercise should capture the following:

  1. The typical pain point that your API solves, i.e. why your API needs to exist
  2. The overall workflow and how your API fits into it
  3. The developer personas that your API targets
  4. How the API works in cooperation or in competition with other vendors
  5. How developers will likely discover and subsequently onboard with your API

Mistake #2: Lack of Clarity

Too often, our APIs start off as a series of isolated endpoints that don't solve a larger problem. This can lead to confusion for developers considering the use of your API, as they may not be sure when and how it fits into their problem.

When first starting out, make sure your API is clear in the problem it is trying to solve, as well as what it isn't trying to solve. An API that does one thing and does it well through clarity of focus far outweighs an API loaded down with lots of disconnected features that don't solve a single problem. Become hyper-focused on understanding the problem, then solve for that specific problem first, before you expand your API's scope.

Mistake #3: Delivering Features Not Capabilities

Capabilities enable someone to achieve something they previously could not, such as machine learning or SMS messaging. Features are the individual steps and/or mechanisms that allow them to achieve those outcomes.

As API product owners, we must be able to separate what we are helping our target audience achieve (the capabilities) vs. the features that help them get there (the API endpoints). If you are focused too much on the API design before you know what your audience is trying to achieve, then your API product will fall short. This is often the case for APIs built on top of a database, as the API simply focuses on data delivery rather than desired outcomes of what the API can do for the consumer.

Mistake #4: Lack of Product Ownership

Once your API is delivered into production, your job as API product owner is only getting started. APIs are just like any other product – they must operate on a product lifecycle that matures the product over time.

Most product owners find that there is a whole other world of opportunity that lies beyond the first version of a product. To get to this stage, you must always be focused on the next release. Define your API product roadmap, deliver continuously, and seek input from your stakeholders. Continuous feedback from stakeholders beyond the first release is critical for gaining traction and maturity.

Mistake #5: Long Design and Delivery Cycles

The longer your API takes to get into the hands of consumers, the longer your feedback loop with stakeholders. However, a rushed API often requires changes that will force consumers to adapt or die. How do API product teams balance the need for speed and consumer safety?

I recommend a design-first approach that identifies the capabilities to be delivered, designing the API to meet those capabilities, then delivering the API as fast as possible. To accelerate the delivery process, consider the following:

  1. Build an API delivery team using cross-functional resources: developers, QA, technical writers, and other roles necessary to deliver the API end-to-end
  2. Utilize API definition formats, such as the OpenAPI Specification, to communicate the API design to everyone involved
  3. Take advantage of mocking tools that allow for early experimentation with your OpenAPI definition to work out any design details early, before coding starts (when the cost of API design change is much lower)
  4. Keep stakeholders involved early and often through shared design documents and mockups
  5. Deliver the API continuously rather than all-at-once, using stakeholder feedback to prioritize the delivery schedule based on their needs
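As a sketch of point 2, here is a minimal, hypothetical OpenAPI (Swagger 2.0) fragment of the kind a team might share with stakeholders and load into a mocking tool before any code is written; the endpoint and fields are illustrative only:

```yaml
swagger: "2.0"
info:
  title: Project API          # hypothetical API used for illustration
  version: "0.1.0"
paths:
  /projects:
    get:
      summary: List projects visible to the caller
      produces:
        - application/json
      responses:
        "200":
          description: A list of projects
```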

Remember: Once released, it is difficult to change an API design. Use this accelerated delivery process to expedite your learning and stakeholder feedback early, to avoid needing to make drastic design changes after your API is released.

How Secure Is Your API?


You have researched the latest API design techniques. You have found the best framework to help you build it. You have all the latest tools in testing and debugging at your fingertips. Perhaps you even have an amazing developer portal set up. But is your API protected against the common attack vectors?

Recent security breaches have involved APIs, giving pause to anyone building out APIs to power their mobile apps, partner integrations, and SaaS products. By applying proper security practices and multiple layers of security, our APIs can be better protected.

Recent API Security Concerns

There have been several API security breaches demonstrating some of the key vulnerabilities that can occur when using APIs. These and other recent cases are causing API providers to pause and reassess their API security approach.

Essential API Security Features

Let’s first examine the essential security practices to protect your API:

Rate Limiting: Enforces request thresholds, typically based on IP, API tokens, or more granular factors; prevents traffic spikes from negatively impacting API performance across consumers. Also prevents denial-of-service attacks, whether malicious or unintentional due to developer error.

Protocol: Parameter filtering to block credentials and PII from being leaked; blocking endpoints from unsupported HTTP verbs.

Session: Proper cross-origin resource sharing (CORS) configuration to allow or deny API access based on the originating client; prevents cross-site request forgery (CSRF), often used to hijack authorized sessions.

Cryptography: Encryption in motion and at rest to prevent unauthorized access to data.

Messaging: Input validation to prevent submission of invalid data or protected fields; parser attack prevention, such as XML entity parser exploits; and protection against SQL and JavaScript injection attacks sent via requests to gain access to unauthorized data.
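To illustrate the rate-limiting layer above, here is a minimal token-bucket sketch in Python. It is illustrative only: a real gateway (Tyk included) keeps these counters in shared storage such as Redis, keyed per token or IP, rather than in process memory:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request rejected (rate limited)

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # [True, True, False]: the third call exceeds the burst of 2
```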

Taking a Layered Approach to Security

As an API provider, you may look at the list above and wonder how much additional code you’ll need to write to secure your APIs. Fortunately, there are some solutions that can protect your API from incoming requests across these various attack vectors – with little-to-no change to your code in most circumstances:

API Gateway: Externalizes internal services; transforms protocols, typically into web APIs using JSON and/or XML. May offer basic security options through token-based authentication and minimal rate limiting options. Typically does not address customer-specific, external API concerns necessary to support subscription levels and more advanced rate limiting.

API Management: API lifecycle management, including publishing, monitoring, protecting, analyzing, monetizing, and community engagement. Some API management solutions also include an API gateway.

Web Application Firewall (WAF): Protects applications and APIs from network threats, including Denial-of-Service (DoS) attacks and common scripting/injection attacks. Some API management layers include WAF capabilities, but a dedicated WAF may still be required to protect against specific attack vectors.

Anti-Farming/Bot Security: Protects data from being aggressively scraped by detecting patterns across one or more IP addresses.

Content Delivery Network (CDN): Distributes cached content to the edge of the Internet, reducing load on origin servers while protecting them from Distributed Denial-of-Service (DDoS) attacks. Some CDN vendors will also act as a proxy for dynamic content, reducing TLS overhead and unwanted layer 3 and layer 4 traffic on APIs and web applications.

Identity Providers (IdP): Manage identity, authentication, and authorization services, often through integration with API gateway and management layers.

Review/Scanning: Scans existing APIs to identify vulnerabilities before release.

When these solutions are applied in layers, you can protect your API more effectively:


How Tyk Helps Secure Your API

Tyk is an API management layer that offers a secure API gateway for your APIs and microservices. Tyk implements security features such as:

  • Quotas and Rate Limiting to protect your APIs from abuse
  • Authentication using access tokens, HMAC request signing, JSON Web Tokens, OpenID Connect, basic auth, LDAP, Social OAuth (e.g. GPlus, Twitter, GitHub) and legacy Basic Authentication providers
  • Policies and tiers to enforce tiered, metered access using powerful key policies

Carl Reid, Infrastructure Architect at Zen Internet, found that Tyk was a good fit for their security needs:

“Tyk complements our OpenID Connect authentication platform, allowing us to set API access / rate limiting policies at an application or user level, and to flow through access tokens to our internal APIs.”

When asked why they chose Tyk instead of rolling their own API management and security layer, Carl mentioned that it helped them to focus on delivering value quickly:

“Zen have a heritage of purpose building these types of capabilities in house. However after considering whether this was the correct choice for API management and after discovering the capabilities of Tyk we decided ultimately against it. By adopting Tyk we enable our talent to focus their efforts on areas which add the most value and drive innovation which enhances Zen’s competitive advantage”

Find out more about how Tyk can help secure your API here.

What could be better? What needs to be added? Tell us, we can take it!

At Tyk, we are committed to helping people connect systems. We have some pretty great ideas on how that should happen. Tens of thousands of Tyk users agree. Maybe even you?

When it comes to our roadmap, we listen to the thousands of open source users, Pro license holders and cloud subscribers. We listen, build and release. Always getting better. Fast.

We believe we have the most transparent roadmap in the industry: it's a Trello board, and it sits here.

We would love you to contribute to this, so let us know what your priorities are. You can feed into it by contacting us through the forum, GitHub, Gitter or email.

So this post is aimed at you. We know you have an opinion; we know you think there is a better way of doing things. That's why you use Tyk and not "Boring McBoringface's Monolith API Stack". So whether you have the Community, Pro, Enterprise or Cloud edition – tell us what you want from Tyk.

Delivering performance with version 1.7.1

We get compared to other API gateways a lot, and we haven't published any benchmarks recently, so we thought we'd do a major optimisation drive and make sure that Tyk was really competitive and really performant. So we've released version 1.7.1 of the gateway, and it flies.

As usual, you can get v1.7.1 from our GitHub releases page. It's fully compatible with the latest dashboard (0.9.5), so you can do a drop-in replacement of the binary to get the improvements. Nothing else is needed.

We say there are performance improvements, so what are they? Let's look back: in the last round of benchmarks we did, we found that Tyk could handle about 400 requests per second before getting sweaty. Those tests were done using various tools (Gatling and LoadRunner) on various hardware, including a local VM.

With version 1.7 we found you could push that a bit higher, to around 600 rps, thanks to some smaller optimisations where we offloaded non-returning write operations off the main thread into goroutines.
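The idea behind that optimisation, taking non-returning writes off the request path, can be sketched as follows. Tyk itself does this with goroutines; this Python queue-and-worker version is an illustrative stand-in, not Tyk's actual code:

```python
import queue
import threading

writes = queue.Queue()
completed = []

def writer():
    # Background worker: drains queued session writes off the hot path.
    while True:
        record = writes.get()
        completed.append(record)  # stand-in for the real storage write
        writes.task_done()

threading.Thread(target=writer, daemon=True).start()

def handle_request(session):
    writes.put(session)  # fire-and-forget: the caller never blocks on the write
    return "response"

for i in range(3):
    handle_request({"session": i})

writes.join()  # only the demo waits; a real request handler would not
print(len(completed))  # 3
```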

Now with 1.7.1 we’ve made Tyk work well at over 1000 rps on cheaper hardware.


In our earlier benchmarks we used 4-core machines since Tyk is CPU bound, so the more cores, the better. But 4 cores are pricey and you don’t want to be running a fleet of them. So the benchmarks presented below were set up on a much smaller 2-core machine. You can find the full test setup detail at the bottom of this blog post.

In brief, the test setup consisted of three Digital Ocean servers in their $0.03 p/h price bracket: 2GB, 2-core Ubuntu boxes. These are cheaper than the dual-core offerings at AWS, which is why we picked them (AWS t2.medium instances are actually pretty competitive, but their burstable CPU capacity causes locks when it kicks in, and at $0.052 per hour they are still about 2 cents more expensive, so they aren't great for high-availability tests).

The test was performed in "rush" mode, ramping from 0 to 2,000 concurrent connections. That totalled about 1,890 requests per second at the end of the test.


Tyk 1.7.1 performance benchmarks


The average latency of 20ms is down to the load generators operating out of Virginia (AWS) and the Digital Ocean servers living in New York. When we ran comparison benchmarks using AWS instances, we found the latency was around 6ms, so the overhead is network latency, not software generated.

As can be seen from the results, Tyk 1.7.1 performed well, with an average latency of 28ms overall, serving up 115,077 requests in 120 seconds at an average of 959 requests per second. That translates to about 82,855,440 hits per day.
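Those headline figures can be sanity-checked arithmetically (86,400 seconds in a day is 720 of the 120-second test windows):

```python
requests, window = 115_077, 120          # requests served in the 120 s test
rps = requests / window                  # average requests per second
per_day = requests * (86_400 // window)  # extrapolated daily hits

print(round(rps))  # 959
print(per_day)     # 82855440
```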

This is on a single node that costs 3 cents per hour. We're pretty chuffed with that. We can see there are a few errors and timeouts, but at a negligible level compared to the traffic volume hitting the machine.

This release marks a major performance and stability boost to the Tyk gateway which we’re incredibly proud to share with our users.

As always, get in touch with us in the comments or in our community portal.

Test Setup In Detail

The test setup involved three 2GB/2-core/40GB Digital Ocean Ubuntu 14.04 instances, configured for high network traffic by increasing these limits:

Added fs.file-max=80000 to /etc/sysctl.conf

Added the following lines to /etc/security/limits.conf

* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000

The Tyk gateway was installed on one server, Redis and MongoDB on another, and Nginx on the third. Each was left with a bare-bones setup, except to ensure that they bound to a public interface so we could access them from outside.

Tyk was configured with the Redis and MongoDB details and had the Redis connection pool set to 2500, with the optimisations_use_async_session_write option enabled. Analytics were set to purge every 5 seconds. Tyk was simply acting as a proxy, so the API definition was keyless.

Redis and Nginx were left as-is; however, Digital Ocean have their own configurations for these, so they are already optimised with recommended HA settings. If you are using vanilla repositories, you may need to configure both Redis and Nginx to handle more connections by default for a rush test.

Why we built Tyk

Two years ago we built Loadzen. It did OK, users wanted more features, and the next thing you know we were extending it all over the place.

Loadzen became a monolith. A year ago we refactored the application to be completely service-based and built a really shiny API for it, with an AngularJS frontend and a Tornado-based backend. It was basically a single-page web app, and we wanted to eventually expose the API to a new CLI we had built.

The thing is, when it came to deciding how to integrate key and authorisation handling, it meant adding massive additional components to our application, including:

  • Managing keys in Redis
  • Machinery for revoking and re-validating keys
  • Keeping bearer tokens (for the webapp) and API tokens separate, and managing different expiry rates
  • Rate limiting and quotas: we didn't want to be flooded

The list goes on. It turned out that shoehorning all this functionality into our existing authentication and security infrastructure was hard, and writing rate-limiting code just seemed ridiculous for our needs. That's when we came upon the idea of using an API gateway.

Now there are plenty out there, including paid ones – and we briefly considered using 3Scale – but what put us off all of them was that we simply wanted a component we could plug into our architecture with minimal fuss, one that would use our existing security mechanisms with a minimum of rewrites.

That's how Tyk came about. We loved Golang, and it seemed like the perfect fit: it was fast, had an excellent standard library, and made it easy to write service-level components.

We dogfooded the Tyk core gateway on Loadzen for 3-4 months to make sure it wasn't leaking memory or eating up resources (we never launched the API, but we had Tyk handle all web-app-based requests and bearer tokens), and it performed admirably. So we decided to extend it a bit.

That's how we built Tyk, and why we built it. We made a conscious decision to make the core open source; nobody installs network-level components anymore if they can't look at the source. We also wanted to make sure that the project grew and grew – this is our first work with Golang, and we think that with some community support and early adopters, Tyk can become a really useful tool in the systems developer's arsenal.
