Tyk News

All the latest news & updates on the Tyk API Management Platform

API Eventing Is The Next Big Opportunity For API Providers

For the last decade, modern web APIs have grown from solutions like Flickr, to robust platforms that generate new business models. Throughout this period of growth, most APIs have been limited to request-response over HTTP. We are now seeing a move back to eventing with the popularity of webhooks to connect SaaS solutions, the introduction of technologies such as Kafka to drive internal messaging, and the need for integrating IoT devices.

API eventing completely changes the way API consumers interact with our APIs, creating new possibilities that request-response cannot provide. Let’s examine the driving factors contributing to the rise of API eventing in greater detail, along with the opportunities that may inspire you to consider adding API event support to your API.

Why Should Your APIs Support Events?

Reason #1: API Events Drive Innovation

With the introduction of webhooks into the GitHub platform, software development changed dramatically. Teams were no longer required to explicitly start the build process by clicking a button. The idea of generating daily or hourly builds was a thing of the past. Instead, teams could kickstart the build process whenever new code was pushed to GitHub. Post-commit hooks have always been part of svn and git, but GitHub extended these event streams across the web. Combined with cloud vendor APIs, webhooks enabled teams to build and deploy their code to any environment of their choosing.

Reason #2: API Events Enable Collaboration

Messaging platforms such as HipChat and Slack have changed the way team members collaborate. These team messaging platforms have opened up opportunities to integrate bots and command-line automation. The result is a new way to communicate that goes beyond traditional IRC and group chat. To make this work, these platforms offer a combination of request-response APIs and real-time event streaming to enable external apps, bots, and APIs to be integrated seamlessly into their platform.

Reason #3: API Events Drive Codeless Integration

Software-as-a-Service (SaaS) products are increasingly offering APIs for integration. Integration Platform as a Service (iPaaS) tools such as Zapier and IFTTT help connect them together to automate many common tasks without requiring copy-and-paste. Many iPaaS offerings even host the code on your behalf, removing the need to manage servers and infrastructure.

While this kind of integration does not always require writing code, it is limited by the types of triggers offered by an API. When an API only offers request/response support, clients are required to poll to see if there are any changes to important data. With API eventing, these tools can receive a trigger (the event) when data changes and execute the desired automation flow.
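The difference between the two trigger styles can be sketched in a few lines. The service and field names below are purely illustrative, not any specific iPaaS API: a polling client must repeatedly ask for changes, while an event subscriber is told about them as they happen.

```python
# Minimal sketch of the two interaction styles described above.
# All names here are illustrative, not part of any specific iPaaS product.

class OrderService:
    def __init__(self):
        self._orders = []
        self._subscribers = []        # webhook-style callbacks

    def subscribe(self, callback):
        """Register a callback fired whenever an order is created."""
        self._subscribers.append(callback)

    def create_order(self, order):
        self._orders.append(order)
        for notify in self._subscribers:   # push the event out
            notify({"event": "order.created", "data": order})

    def list_orders(self):
        """What a polling client has to call over and over."""
        return list(self._orders)


received = []
svc = OrderService()
svc.subscribe(received.append)          # the automation flow's trigger
svc.create_order({"id": 1, "total": 9.99})
print(received[0]["event"])             # order.created
```

With the subscription in place, the automation flow runs the moment the data changes, with no polling loop burning requests in between.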

Reason #4: API Events Create Architectural Flexibility

More and more teams are exploring microservice architecture as a way to place boundaries around complex solutions, reducing the cognitive load required to understand a portion of the overall solution. All of this is done with the goal of speeding up delivery by being able to create smaller, independent teams that are able to deliver capabilities rapidly. As a result of a loosely coupled microservice architecture, services emit events to inform other microservices of data changes and key business events that drive business workflows.

Additionally, we are seeing the rise of function-as-a-service (FaaS) within the serverless world. Rather than deploying a complete application, smaller functions are deployed and then triggered through message-based events or through API gateways that provide a request/response style invocation.

Bear in mind that messaging brokers such as Kafka, RabbitMQ, or Amazon SNS/SQS are often used to drive microservice events and trigger function-based services. While these are valid solutions for your internal messaging, they are not designed for externalization. If you externalize events, you should consider how your consumers will need to consume them: perhaps a combination of webhooks, event streaming, or, in some circumstances, the less efficient long polling.
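One part of that externalization is deciding what internal messages may safely leave your boundary. A hedged sketch, with invented field names: a small translation layer whitelists the fields consumers may see before a webhook or event-stream delivery.

```python
# Sketch: internal broker messages often carry fields you would not want
# to externalize. Whitelist what consumers may see before delivery.
# All field names below are invented for illustration.

EXTERNAL_FIELDS = {"order_id", "status", "updated_at"}

def to_external_event(internal_msg: dict) -> dict:
    """Strip internal-only fields before publishing to consumers."""
    return {k: v for k, v in internal_msg.items() if k in EXTERNAL_FIELDS}

internal = {
    "order_id": 42,
    "status": "shipped",
    "updated_at": "2017-03-01T12:00:00Z",
    "db_shard": 7,            # internal detail: never externalize
    "trace_id": "abc-123",    # internal detail
}
print(to_external_event(internal))
# {'order_id': 42, 'status': 'shipped', 'updated_at': '2017-03-01T12:00:00Z'}
```

The whitelist approach fails safe: new internal fields stay internal until you explicitly choose to expose them.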

Reason #5: Events Are The Glue For IoT Devices and Edge Computing

Perhaps you have some smart devices around your home or office. These devices often talk to a cloud service to enable visualization of important data and events from the web or a mobile device. Integrations with IoT services via the cloud-based APIs offered by vendors benefit from event streaming, as third-party automation solutions can extend the usefulness of these devices.

For some device integration scenarios, network connectivity to cloud resources may not be guaranteed or the amount of data produced may require evaluation and aggregation before being sent to the cloud. This is called ‘edge computing’ and has been commonplace for many years, particularly in manufacturing and the energy industry where high bandwidth isn’t always available.

There are now signs that edge computing will also start to emerge as part of the next generation IoT devices. Event streaming is required for edge devices to integrate with each other and with smart controller devices on local networks. If you are building APIs for IoT, event streaming will be essential to edge communication and computation.

Should Your API Offer Event Subscriptions?

As I have written previously, API events expand the kinds of conversations that our APIs need to have as we use them to solve day-to-day problems. I encourage teams to consider how their API conversations can be enriched through the addition of API events. Without eventing support, APIs are simply left to wait for you to ask them something. With eventing support added, APIs are able to have a two-way conversation with other APIs and applications. This produces a better user experience, expands your API’s adoption, and reduces consumer churn.

New WordPress API Portal plugin

We love it when a community comes together!

You can now use a simple, open-source, WordPress plugin to put the functionality of the Tyk API developer portal into any WordPress site. This innovation has been made possible by Liip, active contributors to the Tyk open-source community. The plugin brings together WordPress and Tyk to enable users to build “best of breed” API Developer portals that combine the API portal functionality of Tyk, with the CMS capability of WordPress.

Liip are one of Switzerland’s leading developers of web applications. They have developed and open-sourced this great new WP plugin to extend the possibilities around using the Tyk Developer portal. Thanks guys! The plugin is ready to use. Just follow the links below. It’s already used in production by the Swiss Federal Railways (SBB-CFF-FFS) and we can’t wait to see what you are going to do with it next!

What the plugin offers:

  • Automatic developer registration on Tyk when developers sign up in WP
  • Configuration of API policies available for token registration
  • Developers may request an access token for the available API policies
  • Automatic or manual approval of key requests
  • Storage of token (references) by name and API policy
  • Revoking of tokens by developer
  • Display usage statistics per key (see screenshot)
  • Request quota usage per key (see screenshot)

What this plugin does not offer:

  • Management of Tyk API Gateway (the Tyk Dashboard is best suited for that)
  • WP user registration (there are enough plugins that do that quite well!)

2 easy ways to get the plugin:

  1. The public repo can be found on GitHub here.
  2. The plugin is also published on wordpress.org (although it is slightly behind the status on GitHub).

What future innovations would you like to see in Tyk? Let us know at community.tyk.io

When and How Do You Version Your API?

One of the most frequent questions I receive during API training and coaching engagements involves versioning. When to version. How to version. Whether to version at all. While not all APIs are exactly the same, I have found that there are certain patterns and practices that work for most teams. I have pulled these together to provide a recommendation for a versioning strategy that will help most API providers – whether they are deploying internal, private APIs or public APIs outside the organization.

Do you really need to version your API?

APIs are contracts established between you and your API consumers. Ideally, you will never have to break this contract. This includes URI patterns, payload structures, field and parameter names, expected behavior, and everything else in between. The biggest benefit of this approach is obvious: An API consumer’s understanding never expires. Applications continue to work, making your consumers happy.

However, that may not be reality. There may be times when you need to make a breaking change. When this happens, you need to manage the change carefully so that you never force your API consumers to fix their code unexpectedly.

Breaking vs. non-breaking changes

Non-breaking changes tend to be additive: adding new fields or nested resources to your resource representations, or adding support for new operations such as a PUT or PATCH that was previously unavailable. API consumers should build client code that is resilient to these kinds of non-breaking changes.
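That resilience is sometimes called the "tolerant reader" pattern: the client picks out only the fields it needs and ignores anything new. A minimal sketch, with an invented payload shape:

```python
# A tolerant reader extracts only the fields it needs, so additive
# (non-breaking) changes to the payload are invisible to it.
# The "account" payload shape here is illustrative.

def parse_account(payload: dict) -> dict:
    return {
        "id": payload["id"],
        "email": payload.get("email"),   # optional field, may be absent
    }

v1_payload = {"id": 7, "email": "a@example.com"}
v1_1_payload = {"id": 7, "email": "a@example.com", "nickname": "al"}  # field added

# The additive change does not affect the client at all:
assert parse_account(v1_payload) == parse_account(v1_1_payload)
```

A client written this way survives every minor-version bump; only a genuinely breaking change (a renamed or removed field) forces it to be updated.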

Breaking changes include:

  1. Renaming fields and/or resource paths – often for clarity after your API is released
  2. Changing payload structures to accommodate the following: renaming or removing fields (even if they are considered optional – contracts!); changing fields from a single value to a one-to-many relationship (e.g. moving from one email address per account to a list of email addresses for an account)
  3. Fixing poor choices of HTTP verbs, response codes, or inconsistent design across your API endpoints

In short, once you release your API into the wild, you have to live with it. If you encounter one or more of the items above, it may be time to version your API to prevent breaking your existing API consumers.

Defining Your API versioning strategy

Any evolving, growing API will require an API versioning strategy. When and how you version may vary based on the expectations of your API consumers. I generally recommend the following API versioning strategy as part of an overall API governance model:

  1. If your API is in an early preview release, perhaps to gain feedback from consumers, establish proper expectations that your API may change. At this stage, you will remain at version 1 for some time, but your API design may change. The API is volatile, and consumers should expect changes to occur
  2. Once released, your API should be considered a contract and cannot be broken without a new version release
  3. API versions are major.minor, following the general principles of semantic versioning
  4. Non-breaking changes result in a bump in the minor version; clients are automatically migrated to the latest version and should not experience any negative side-effects
  5. Breaking changes result in a new major version; clients must specifically migrate to this new version as it contains one or more breaking changes. You must establish an appropriate timeline and regular communication with your API consumers to ensure that they migrate to the new version. In some cases, this may not be possible and your team will be required to support the previous API version indefinitely

How to implement API versioning

Once you determine that you need a new version of your API, you need to decide how to handle it. Preferably, you have decided ahead of time and encouraged API consumers to request version 1 of your API. There are three common approaches to implementing API versioning:

  1. Resource versioning: the version is part of the Accept header in the HTTP request. e.g. Accept: application/vnd.github.v3+json is sent to GET /customers. This is considered the preferred form of versioning by many, as the resource representations are versioned while keeping resource URIs the same. Some APIs choose to provide the latest version as the default, if not provided in the Accept header

  2. URI versioning: the version is part of the URI, either as a prefix or suffix. e.g. /v1/customers or /customers/v1. While URI-versioning isn’t as pure as content-based versioning, it tends to be the most common as it works across a variety of tools that may not support customized headers. The downside is that resource URIs change with each new version, which some consider counter to the intent of having a URI that never changes.

  3. Hostname versioning: the version is part of the hostname rather than the URI. e.g. https://v2.api.myapp.com/customers. This approach is used when technology limitations prevent routing to the proper backend version of the API based on the URI or Accept header.

No matter which option you choose, API versions should only include the major number. Minor numbers should not be required (e.g. /v1/customers, not /v1.1/customers).
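For either of the first two approaches, resolving the requested major version is a small parsing exercise. A sketch, assuming a made-up vendor string (`vnd.myapp`) and falling back to version 1 when the client does not specify one:

```python
import re

# Resolve the requested major version from the Accept header or the URI.
# The vendor string "vnd.myapp" is an invented example, not a real API.

def version_from_accept(accept: str, default: int = 1) -> int:
    """e.g. 'application/vnd.myapp.v2+json' -> 2"""
    m = re.search(r"vnd\.myapp\.v(\d+)\+json", accept)
    return int(m.group(1)) if m else default   # unversioned: use default

def version_from_uri(path: str, default: int = 1) -> int:
    """e.g. '/v2/customers' -> 2"""
    m = re.match(r"/v(\d+)/", path)
    return int(m.group(1)) if m else default

print(version_from_accept("application/vnd.myapp.v2+json"))  # 2
print(version_from_uri("/v1/customers"))                     # 1
```

Note that both parsers look for the major number only, matching the guideline above that minor numbers never appear in requests.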

Final thoughts

Remember, APIs are contracts with your consumers. Break your contract and a new version is required. Choose a strategy, have a plan, and communicate that plan with your API consumers. They will thank you for it.

Tyk API Gateway v2.3.2 and Tyk Dashboard released

We are happy to announce a new version of Tyk Gateway and Tyk Dashboard.

This is a bug fix release and contains critical updates for our Tyk Hybrid users, as well as various fixes for all users.

Tyk Gateway

  • Fixed a memory and connection leak affecting our Hybrid users, which caused Tyk Gateway to crash due to a lack of available memory, or because of opening too many network connections to Tyk Cloud.

  • Now you can allow a double slash “//” in your URLs, by setting the http_server_options.skip_url_cleaning option. This may be useful if you need to pass URLs as parameters to your API, for example: “http://your.api.com/get/http://example.com”.

  • Fixed bug where JWT claims would not be included in the middleware context in subsequent requests

  • Fixed runtime panic when an OAuth client is added with an API that does not exist in the gateway yet
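For reference, the skip_url_cleaning flag mentioned above lives under http_server_options in the Gateway configuration file. A minimal fragment might look like the following (assuming the standard tyk.conf layout, with all other settings omitted):

```json
{
  "http_server_options": {
    "skip_url_cleaning": true
  }
}
```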

Tyk Dashboard

  • Added Organisation name to dashboard UI for multi-tenant installations

  • Fixed ‘Search by key’ in the key analytics view

  • Fixed API Import schema to work with api_model field

  • Fixed import/export API for policies where the ACL would not be properly set on import

  • Fixed uptime tests UI issues for requests with multi-line bodies

Both releases are available via our package cloud repositories and as our official docker images.

API Developer Portals

Increasing API Adoption Through Developer Portals

Effective communication is a critical factor for API adoption. Since APIs do not have a user interface, your documentation is the primary method for communicating with developers on how to use your API. Your API documentation is your API’s user interface.

A developer portal helps bring together the different styles of communication that you need to ensure that APIs can be found, speak to the benefits of using your API, and guide developers on how to integrate your API.

The Value of API Documentation

API documentation is the primary communication medium between the API provider and consumer. Unless the API is open source, you will likely never see the source code behind it. Therefore, the only thing that developers consuming your API have is your documentation. Without clear and complete documentation, developers will struggle to use your API.

We use the term API documentation as if there is only one kind of documentation. Yes, you need to deliver a great API reference for developers. Tools such as Swagger, RAML, and Blueprint are just a few of the formats available to help build them. However, complete API documentation requires more than just your API reference in HTML or PDF form. It requires having a developer portal that pulls together everything that they will need to be successful.

What Makes a Great Developer Portal?

Every great developer portal includes the following content:

Features and Discovery – Provides an overview of the API, addressing concerns such as benefits, capabilities, and pricing of your API to qualify prospects.

Case Studies and Examples – Case studies highlight applications that have been built using your API.

Reference Docs – Provides a reference for each API endpoint to developers, including details on the URL, HTTP verb(s) supported, response codes, and data formats. This is where Swagger, RAML, and Blueprint formats are used to generate documentation from an API definition.

Guides and Concepts – As an API consumer, the most difficult part of using an API is the initial learning curve. Guides offer help with learning an API’s concepts and vocabulary during this critical stage.

Problem/Resolution – Documentation can help developers troubleshoot error response codes and ease the burden on your developers and support staff.

Changelog – Shows what has been added or improved recently, helping developers to find new ways to use your API.

Going Beyond Content

Beyond content, developer portals should include the following disciplines:

Easy Onboarding – APIs rarely gain adoption if you make it difficult to get started. Easy onboarding, from self-registration to a guided tour will help developers overcome the challenges to adopting a new API.

Operational Status – Is your API available or temporarily down? A simple status page that reflects your API’s availability will help to inform developers and operations staff that see increased errors in their applications.

Live Support – Including a chat solution, whether embedded into your developer portal or through communication platforms such as Slack, will help provide direct access to those who can help resolve integration issues.

How Secure Is Your API?


You have researched the latest API design techniques. You have found the best framework to help you build it. You have all the latest tools in testing and debugging at your fingertips. Perhaps you even have an amazing developer portal set up. But, is your API protected against the common attack vectors?

Recent security breaches have involved APIs, giving anyone building out APIs to power their mobile apps, partner integrations, and SaaS products pause. By applying proper security practices and multiple layers of security, our API can be better protected.

Recent API Security Concerns

There have been several recent API security breaches that demonstrate some of the key vulnerabilities that can occur when using APIs. These and other cases are causing API providers to pause and reassess their API security approach.

Essential API Security Features

Let’s first examine the essential security practices to protect your API:

Rate Limiting: Enforces API request thresholds, typically based on IP, API tokens, or more granular factors; prevents traffic spikes from negatively impacting API performance across consumers. Also prevents denial-of-service attacks, whether malicious or unintentional due to developer error.

Protocol: Parameter filtering to block credentials and PII from being leaked; blocking endpoints from unsupported HTTP verbs.

Session: Proper cross-origin resource sharing (CORS) to allow or deny API access based on the originating client; prevents cross-site request forgery (CSRF), often used to hijack authorized sessions.

Cryptography: Encryption in motion and at rest to prevent unauthorized access to data.

Messaging: Input validation to prevent submitting invalid data or protected fields; parser attack prevention such as XML entity parser exploits; SQL and JavaScript injection attacks sent via requests to gain access to unauthorized data.
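The input validation described above can be sketched as a simple field whitelist that rejects unknown or protected fields before a payload reaches your handlers. The field names below are invented for illustration:

```python
# Hedged sketch of request input validation: reject unknown or protected
# fields before the payload reaches business logic. Names are illustrative.

ALLOWED = {"name", "email"}
PROTECTED = {"is_admin", "account_balance"}   # server-controlled fields

def validate(payload: dict) -> list:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for field in payload:
        if field in PROTECTED:
            errors.append(f"field not writable: {field}")
        elif field not in ALLOWED:
            errors.append(f"unknown field: {field}")
    return errors

print(validate({"name": "Ada"}))                    # []
print(validate({"name": "Ada", "is_admin": True}))  # ['field not writable: is_admin']
```

Rejecting protected fields outright, rather than silently dropping them, gives API consumers a clear error instead of surprising behavior.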

Taking a Layered Approach to Security

As an API provider, you may look at the list above and wonder how much additional code you’ll need to write to secure your APIs. Fortunately, there are some solutions that can protect your API from incoming requests across these various attack vectors – with little-to-no change to your code in most circumstances:

API Gateway: Externalizes internal services; transforms protocols, typically into web APIs using JSON and/or XML. May offer basic security options through token-based authentication and minimal rate limiting options. Typically does not address customer-specific, external API concerns necessary to support subscription levels and more advanced rate limiting.

API Management: API lifecycle management, including publishing, monitoring, protecting, analyzing, monetizing, and community engagement. Some API management solutions also include an API gateway.

Web Application Firewall (WAF): Protects applications and APIs from network threats, including Denial-of-Service (DoS) attacks and common scripting/injection attacks. Some API management layers include WAF capabilities, but may still require a WAF to be installed to protect from specific attack vectors.

Anti-Farming/Bot Security: Protects data from being aggressively scraped by detecting patterns from one or more IP addresses.

Content Delivery Network (CDN): Distributes cached content to the edge of the Internet, reducing load on origin servers while protecting them from Distributed Denial-of-Service (DDoS) attacks. Some CDN vendors will also act as a proxy for dynamic content, reducing the TLS overhead and unwanted layer 3 and layer 4 traffic on APIs and web applications.

Identity Provider (IdP): Manages identity, authentication, and authorization services, often through integration with API gateway and management layers.

Review/Scanning: Scans existing APIs to identify vulnerabilities before release.

When applied in a layered approach, these solutions protect your API more effectively.


How Tyk Helps Secure Your API

Tyk is an API management layer that offers a secure API gateway for your API and microservices. Tyk implements security such as:

  • Quotas and Rate Limiting to protect your APIs from abuse
  • Authentication using access tokens, HMAC request signing, JSON Web Tokens, OpenID Connect, basic auth, LDAP, Social OAuth (e.g. GPlus, Twitter, GitHub) and legacy Basic Authentication providers
  • Policies and tiers to enforce tiered, metered access using powerful key policies

Carl Reid, Infrastructure Architect at Zen Internet, found that Tyk was a good fit for their security needs:

“Tyk complements our OpenID Connect authentication platform, allowing us to set API access / rate limiting policies at an application or user level, and to flow through access tokens to our internal APIs.”

When asked why they chose Tyk instead of rolling their own API management and security layer, Carl mentioned that it helped them to focus on delivering value quickly:

“Zen have a heritage of purpose building these types of capabilities in house. However after considering whether this was the correct choice for API management and after discovering the capabilities of Tyk we decided ultimately against it. By adopting Tyk we enable our talent to focus their efforts on areas which add the most value and drive innovation which enhances Zen’s competitive advantage”

Find out more about how Tyk can help secure your API here.

How APIs Are Creating the Composable Enterprise

James Higginbotham recently spoke at APIStrat 2016 in Boston, where he shared some insights into how enterprises are using APIs and microservices as part of their digital transformation processes. What does this mean for software architects in today’s enterprise? Let’s first look at the transformations underway toward a composable, modular enterprise. Then we will consider some of the impacts it will have on our day-to-day efforts as software architects.

What is the Composable Enterprise?

The composable enterprise is one that strives to capture business and technical capabilities as APIs, seen as modular components across lines of business. Apps and integrations are built upon these APIs by the enterprise for internal, partner, or public use. A composable enterprise combines commercial off-the-shelf (COTS) packages, SaaS platforms, and custom development to address market needs quickly.

Coming from a software background, I tend to think of it more like a modular enterprise with APIs that can be combined to create new and interesting solutions.

Capital One is one enterprise moving in this direction by releasing a few public productized APIs, with the rest remaining for private or partner consumption.

Moving Toward an API-Centric Architecture

Within the enterprise, APIs are traditionally viewed as bolt-on solutions. They connect internal systems together, or offer enterprise data to remote mobile or web apps. However, reuse isn’t the goal of an integration API, resulting in one-off APIs scattered across the organization. In a composable enterprise, APIs are designed first, becoming the outward-facing contract that hides all of the internal details.

Legacy systems can then be transitioned to a new architecture or replaced with a new solution over time. Whether you are wrapping a commercial product, monolithic application, or a microservice architecture – it doesn’t matter.

APIs Should Focus on Capabilities

APIs in a composable enterprise focus on delivering business and technical capabilities. This requires that we shift our thinking from databases and code to helping people achieve their goals. Users don’t care how we organize our database schema, what programming languages we use, and what the current hot server or client-side framework may be. They want to get things done and move on. As architects, we need to focus on both the business and technology – APIs are the key deliverable that is shared between them. Our API designs should focus on delivering desired outcomes, not just data.

Manage Your API as a Product

The traditional view of APIs results in APIs being treated the same as internal code, with limited access or visibility. Applying product-thinking to APIs requires a shift toward developer portals, self on-boarding of developers, and a customer-driven approach. Most importantly, architects must monitor API usage, making adjustments to the API product based on consumption patterns.

At the center of a product-driven architecture is an API gateway and management layer. The gateway routes incoming API requests to internal processes or microservices. The management layer provides security, role-based enforcement, rate limiting, and usage metrics. A variety of commercial and open source solutions exist to fill this gap. Tyk is an open source solution that offers a cloud, hybrid or on-premise option to solving this problem.

Want to Learn More About the Composable Enterprise?

You can view the slide deck from the presentation, available via Slideshare.

Tyk API Gateway v2.3 released

Sweet Shyamalamadingdong! Tyk v2.3 is out!

If API Gateways were professional wrestlers, Tyk would probably be a Luchador… she would use different stage names: Pirata Rosa, Presidente Misterioso, or La Fenix Mayor, and each persona would instill quivering anticipation with onlookers: “No! Not Pirata Rosa!” they would whisper in their lunch break, “I heard she made El Ruso Picante eat his own mask, after she kicked his ass with a rubber chicken….”

“This one time, La Fenix Mayor strolled into the ring,” they said in hushed tones, “she took one of the fans of El Coyote Magico, and then beat him to death with his own feet!”, tickets to her shows would sell in the millions, her fans would be adoring, her fame would spread across the world and she would be so renowned that the Mayor of Funkytown would need to erect a statue to her honor… Tyk the API Gateway, the Luchador-of-Luchadores, the Beater-of-things, oh it would be magical…

Ok, so maybe I’m going totally off-topic here… we’re here to talk about the newest, shiniest, most kick-ass-iest API Gateway update in the history of API Gateway updates.

So what’s new pussycat? Let. Me. Tell. You:

Performance, performance, performance, performance… wait, did I mention performance? Performance.

That’s right: Tyk v2.3 can handle more than twice the traffic that v2.2 could. On top of that, it’s more efficient in how it handles Redis, and it’s more clever in how it manages its traffic.

The latest incarnation of Tyk comes with a distributed rate limiter. This new module means that Tyk no longer needs to synchronise rate limits across a cluster via Redis; instead it relies on ongoing inter-cluster chatter and a distributed, in-memory, token-based limiter that is eventually consistent rather than hard-synchronised.

The new rate limiter is also more forgiving: if you set a limit of 1 request per second, then one request per second will always get through, whereas previously the client would need to implement some kind of back-off strategy.

This also means an overall reduction in the amount of Redis traffic coming from the Gateway that ensures that the Gateways scale effectively as CPU cores increase.
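To illustrate the general technique (this is a sketch of a token-bucket limiter, not Tyk’s actual implementation): tokens refill continuously at the configured rate, so a limit of one request per second always lets one request per second through, with no back-off dance required.

```python
import time

# Illustrative token-bucket rate limiter. This is NOT Tyk's implementation,
# just a sketch of the technique described above: tokens refill continuously,
# and a request is allowed whenever at least one token is available.

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=1)
print(bucket.allow())   # True: the first request gets through
print(bucket.allow())   # False: bucket is empty until it refills
```

Making each gateway node keep such a bucket in memory, and reconciling counts lazily across the cluster, is what trades hard synchronisation for eventual consistency.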

And all of that culminates in a really cool, auto-scaling system that can handle a bunch of traffic, efficiently and effectively.

You want to extend Tyk? Have I got some news for you…

So if Tyk were a wrestler who used a weapon outside of the ring (as you do), she wouldn’t be like most other wrestlers, who are limited to a single tool – usually a poor spectator’s chair. Tyk – or La Fenix Mayor!, to use her nom-de-souffrance – will use the spectator’s chair, the spectator’s backpack, the spectator’s handkerchief, or even the poor spectator themselves to beat her opponent to a whimpering mess.

(That’s quite a strained metaphor, I’ll admit.)

What we’re saying is that in Tyk v2.3 you can add pre-processing, post-processing, and custom authentication middleware in more than just one host language. With Tyk, you can add plugins using… clears throat, drumroll:

That’s right, if you want to add a custom authentication server to Tyk and you want it to be fast, you can use any of the above to extend how Tyk works.

The new plugins feature means that on top of all the native goodies that come with Tyk out of the box, extending it now fits the needs of your team and not what we (or anyone else) impose on you. It’s all about flexibility and performance, and here, we perform very well.


While Tyk v2.3 can easily handle 2,000 requests per second on commodity hardware, when you start introducing plugins and host languages you can see a bit of a performance hit – but that performance hit, in Tyk, is designed to still be plenty fast. In our benchmarking, Python 3 plugins performed at about 1,300 requests per second with sub-100ms latency, while gRPC with a local gRPC host gave us 1,400 requests per second with sub-100ms latency.

This is important, so I’m going to say it again: You can extend Tyk with almost any language you choose, and get blazing-fast performance to boot.

Pre-built plugins

Now we’re not saying “we have all this cool new stuff for you but you need to do it yourself”. Hell no! We always like to add value – just look at all the cool features that come built into Tyk natively. No, we like to go further, to go deeper. With this release, we’ve also been hard at work fashioning some initial plugins that are (1) pretty damned useful, and (2) a great starting point for setting up your own.

Some of the more awesome plugins that we’ve put together are:

  • Loggly integration
  • Datadog integration
  • Bot detection
  • Message Queue integration (AMQP amongst other message queue hand-offs)
  • Webhook sidecar
  • Correlation IDs for request logging

We’ve also generated demos to get you started in your favourite languages for:

And we’ll keep going…

Is this the future of Tyk?

In a word: No. Unlike other vendors, we like to pack value into our offering, and we don’t want to take the chicken-sh*t way out and force our customers to maintain their own plugins, or forks of ours in order to add functionality to Tyk. We don’t want you to increase your technical debt by having to maintain customisations to Tyk over huge swathes of time.

So our plan is to continue adding great functionality to the Tyk OSS core, and even migrating popular plugins into the core so that they are available to all “at the speed of Go”, while giving our users the flexibility and capability to make it easy to integrate Tyk with your systems.

In future, we may opt for creating plugins for Tyk in Go itself, at which point things may change, but for now: Tyk is there to make sure that you can get your sh*t done and get it done well.

Other cool stuff

So the above two features are pretty much the biggest, baddest announcements that come with Tyk wading into the ring with v2.3, but we have also been busy fixing bugs and adding smaller, nice-to-have features to the overall system. I’ve listed a few below because they are pretty cool, and it’s nicer to read about them here than to parse our changelog:

Environment variable configuration

This is a dull but very useful change – you can now configure all of the Tyk Gateway, Analytics and Pump settings via environment variables – more precisely, you can override them. This is extremely useful for those of you deploying into Docker orchestration environments such as Kubernetes, where files aren’t fun to deploy.
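As a rough sketch of what this looks like in practice (the exact variable names below are from memory and may differ between versions – check the configuration reference before relying on them), Gateway settings follow a TYK_GW_<SETTING> pattern that mirrors the JSON config, so a Docker deployment might override settings like this:

```shell
# Override tyk.conf settings via environment variables at container start.
# TYK_GW_* maps to the Gateway's JSON configuration keys.
docker run -d \
  -e TYK_GW_LISTENPORT=8080 \
  -e TYK_GW_SECRET=352d20ee67be67f6340b4c0605b044b7 \
  -e TYK_GW_STORAGE_HOST=redis.internal \
  tykio/tyk-gateway
```

No config file changes to bake into an image – the orchestrator injects the environment, and the Gateway picks it up at boot.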

Live Gateway logs in the Dashboard

A lot of the debug information in Tyk comes from the Gateway log (stderr) output, so we’ve made it so that key log operations are also copied through to a live Dashboard view – this way you can quickly check why an API hasn’t loaded or if a system event or error has occurred without reaching for the raw logs from the Gateway hosts.

Custom error templates per response code in XML or JSON

This was a popular request: you can now override all of our error templates, per status code, in either XML or JSON. This means you have much more control over how your API Gateway responds to end users when errors occur.
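As an illustrative sketch (assuming the Go-template format used by the stock error template – the `{{.Message}}` placeholder and the per-status-code file-naming convention should be verified against the docs), a custom JSON template for 500 errors might look like:

```json
{
  "error": {
    "code": 500,
    "message": "{{.Message}}",
    "hint": "Please contact support if this persists"
  }
}
```

The Gateway fills in `{{.Message}}` with the underlying error, and your consumers get a response shaped the way your API promises.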

Chained authentication methods

A popular request from the community – the capability to use HMAC-based message signing in conjunction with a bearer token. With this mechanism, you can chain many different authentication methods together to provide maximum security for your applications.

Hot reconfigure and reload of the Tyk Gateway process from the Dashboard

Tyk has always been able to hot-reload from the command line, but we’ve made the process smoother and more efficient. Not only can you send a signal to the process to fork and start a new process without dropping any connections, but you can also hot-reload from the Dashboard and re-configure the Gateway in the process, all without having to set foot on the host.

(Naturally, this feature can be disabled!)

A new CLI tool to help you build and deploy plugins

Having so many plugin options means we needed a better way to publish your code to the Gateway layer, so we’ve started work on a CLI tool. This tool can currently do only one thing: sign and package up your plugins so that they are cryptographically guaranteed before being deployed (and verified) by the Gateway.

But this project will keep growing, and more and more functionality will be added to the CLI to make it easy to script common Gateway operations.

More logger integrations

Tyk v2.3 now has logger integrations for:

So that you can aggregate your Gateway logs into the system of your choice.

Separate Redis cache

For those using our caching mechanism: in very high-availability environments you do not want your Redis cache to be the same database that Tyk uses for configuration information. With this update it is possible to completely separate out the cache to a different Redis database or cluster.
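A sketch of what this looks like in tyk.conf (the key names and shape here are our best recollection – verify them against the configuration reference before use):

```json
{
  "enable_separate_cache_store": true,
  "cache_storage": {
    "type": "redis",
    "host": "redis-cache.internal",
    "port": 6379
  }
}
```

With this in place, cache traffic hits a dedicated Redis instance, so a cache flush or cache-node failure never touches your configuration store.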

More portability: Import/Export API

It is now possible to back up and re-create your Organisations, Policies and APIs using a dedicated import/export API. This will allow you to completely re-generate an installation from backed-up assets without worrying about mis-attributed IDs.

LetsEncrypt support

A fun and currently experimental feature: the Tyk Gateway can now auto-provision SSL certificates for your domain-bound APIs, so you do not need to configure them yourself. All certificate information is encrypted and shared across a cluster so that subsequent visits to your Gateways are fast and scalable.

That’s all folks

Until we meet again – we’re already planning v2.4 and have some very cool stuff in the pipeline for you. As always, get in touch with us on the community forum, or directly via Twitter, to give us feedback or ask any questions.

For those of you on v2.2, we have created some upgrade notes.

Now… back to the ring.

Martin and the team at Tyk Towers in Shoreditch, London.


SDK Patterns For Accelerating API Integration

Here at Tyk we find that everyone has a different opinion when it comes to SDKs: Some think they are the best thing since sliced bread, whilst others think they suck and are a waste of time. The truth is usually somewhere in between.

James Higginbotham was at API Days last month – Let’s hear what he found out about the latest SDK trends.


I recently attended APIStrat 2016 in Boston, where there was plenty of discussion around the practices of providing and consuming APIs. One notable thing was the reemergence of SDKs for APIs. Unlike previous years, the discussion wasn’t SDK vs. no-SDK; instead, it was about the various ways we can empower API consumers to quickly integrate API providers into their solutions. Out of these discussions came four patterns for how API providers can offer SDKs to help their API consumers integrate quickly. Let’s revisit what an SDK is, and then examine these four patterns to see which may be the best fit for your API.

What is an SDK?

A Software Development Kit, or SDK, is a packaged solution that includes code for developers who wish to interact with a web-based API. SDKs target a specific programming language or runtime platform, such as Java/the JVM, Ruby, JavaScript, PHP, Python, Perl, or Golang.

SDKs speed the integration process between a server-side or client-side/mobile application and the web API. They often include one or more of the following:

  • A client library for the specific programming language, removing the need to deal with the lower-level details of HTTP request/response
  • SDK documentation that describes how to use the client library
  • Example scripts and/or full applications that demonstrate SDK usage
  • Administration/CLI scripts for interacting with the API from the command line, removing the need to write code for common administrative functions

Are SDKs the same as helper libraries?

Traditionally, SDKs include more than client libraries, as noted above. Some API providers offer helper libraries, which simply offer a language-specific distribution of a client library. All other resources often found in an SDK are hosted on their website for reference.

While there is a distinct difference between an SDK and a helper library, many (like myself) tend to use the terms interchangeably. The important thing is to be clear about what is provided in the distribution to set proper expectations with the developer. Providing a clear README file inside the distribution, with links to additional documentation and resources, is also a good practice, as it will help developers get started quickly.

SDK Patterns for API Providers

Now that we understand what SDKs are, we need to determine who will build and maintain them. There are four distinct patterns for providing SDKs:

Provider supported: Vendor supported SDKs are built and maintained by the API provider. They own them, manage them, and keep them in sync as API endpoints are added or enhanced.

Community contributed: Instead of the vendor offering the SDK, the community contributes it – often through GitHub or similar. This may be the case for all SDKs, or just for less popular programming languages that the vendor has not covered. Vendors may choose to let community-contributed SDKs thrive on their own, work with the author(s) to make them better, or eventually offer to take over their maintenance. Be aware that community-contributed SDKs have a tendency to become abandoned over time as authors lose interest or maintainers become unavailable. Communication with community supporters is critical, as many developers may assume these SDKs are vendor-backed and will complain if they are no longer maintained.

Consumer-driven: With the growth of API definition formats such as Swagger, RAML, Blueprint, and others, it is becoming easier for API consumers to generate their own client library from any of these formats. This gives the consumer the most flexibility, as they may opt to create a lightweight wrapper around the HTTP layer, or perhaps generate a robust library with objects/structures that mimic API resources.
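To make the “lightweight wrapper around the HTTP layer” idea concrete, here is a minimal hand-rolled Python client for a hypothetical orders API – the base URL, resource names, and bearer-token scheme are all illustrative assumptions, not a real service:

```python
import json
import urllib.request


class OrdersClient:
    """A thin consumer-built wrapper over HTTP for a hypothetical orders API."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _request(self, method, path):
        # Build a request with auth and content negotiation handled in one place.
        return urllib.request.Request(
            self.base_url + path,
            method=method,
            headers={
                "Authorization": "Bearer " + self.token,
                "Accept": "application/json",
            },
        )

    def get_order(self, order_id):
        # One small convenience method per resource; no heavyweight SDK machinery.
        req = self._request("GET", "/orders/%s" % order_id)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

This is the whole trade-off in miniature: a few dozen lines the consumer fully controls, versus a generated or vendor-supplied library with objects that mimic every API resource.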

HTTP is the SDK: Those familiar with HTTP generally prefer working with it directly rather than through an SDK. SDKs often hide the lower-level details of HTTP and may prevent tuning API consumption to fit the exact needs of the use case. Offering examples using cURL and popular programming languages can help these developers get started, without the overhead of learning a brand-new SDK library.

Which Approach Is Right For Your API?

Some have made the case that SDKs create more challenges for both API providers and consumers, preferring in most cases to simply offer well-documented APIs for developers to make HTTP requests against. However, there are times when an SDK makes sense. This is especially the case in the mobile space, where developers are accustomed to installing an SDK and coding against it rather than composing raw HTTP requests and handling the responses themselves.

Understanding your API consumer audience is the best way to make a decision. When in doubt, start with great documentation and an API definition language such as Swagger, RAML, or Blueprint. This consumer-driven approach lets developers work directly with HTTP, or generate their own SDK as desired. You can then begin to offer SDKs for specific programming languages when your team is ready to fully support them.

Why Do Microservices Need an API Gateway?

Sometimes everything depends on a powerful gateway. Covering security, control and the power of transforms, James Higginbotham explores the ways microservice architectures can benefit from an API Gateway.


With the growth of API as a product, as well as API-centric IT initiatives, API gateways and management layers are becoming more commonplace. But should we consider an API gateway for our microservices as well? If so, what kind of benefits do they offer?

What is an API Gateway?

An API gateway provides a single, unified API entry point across one or more internal APIs. It typically layers in rate limiting and security as well. An API management layer, such as Tyk.io, adds further capabilities such as analytics, monetisation, and lifecycle management.

A microservice-based architecture may have from 10 to 100 or more services. An API gateway can help provide a unified entry point for external consumers, independent of the number and composition of internal microservices.

The Benefits of an API Gateway For Microservices

Prevents exposing internal concerns to external clients. An API gateway separates external public APIs from internal microservice APIs, allowing microservices to be added and boundaries to be changed. The result is the ability to refactor and right-size microservices over time without negatively impacting externally-bound clients. It also hides service discovery and versioning details from the client by providing a single point of entry for all of your microservices.

Adds an additional layer of security to your microservices. API gateways help to prevent malicious attacks by providing an additional layer of protection from attack vectors such as SQL Injection, XML Parser exploits, and denial-of-service (DoS) attacks.

Enables support for mixing communication protocols. While external-facing APIs commonly offer an HTTP or REST-based API, internal microservices may benefit from using different communication protocols. Protocols may include ProtoBuf, AMQP, or perhaps system integration with SOAP, JSON-RPC, or XML-RPC. An API gateway can provide an external, unified REST-based API across these various protocols, allowing teams to choose what best fits the internal architecture.

Decreases microservice complexity. Microservices share common concerns, such as authorization using API tokens, access control enforcement, and rate limiting. Each of these concerns adds development time by requiring that every service implement them. An API gateway removes these concerns from your code, allowing your microservices to focus on the task at hand.

Enables microservice mocking and virtualization. By separating microservice APIs from the external API, you can mock or virtualize your services to validate design requirements or assist in integration testing.

The Drawbacks of a Microservice API Gateway

While there are many benefits to using an API microservice gateway, there are some downsides:

  • Your deployment architecture will require more orchestration and management with the addition of an API gateway
  • Configuration of the routing logic must be managed during deployment, to ensure proper routing from the external API to the proper microservice
  • Unless properly architected for high availability and scale, an API gateway can become a limiting factor and even a single point of failure

Using Tyk For Your Microservice Gateway

Rather than providing an explanation of Tyk’s features as a microservice API gateway, I’ll let Dave Koston, VP Engineering at Help.com, explain how they use Tyk:

“We use Tyk as a gateway in front of around 15 services (of varying sizes). We’re also using Tyk Identity Broker to proxy logins to our existing authentication service. Tyk gives us some really great features out of the box like rate limiting, sessions, token policies, and visibility into api traffic.”

Dave also mentioned that Tyk helps them secure their web socket connections, in addition to their API:

“We also have web socket communication that requires authentication and it was easy to simply add some metadata to Tyk sessions and use the Tyk session store (Redis in our case) to authenticate those web socket connections with the same access token that we use for HTTP.”

To learn more about Tyk and how it can provide an API gateway for your microservices, along with API management of your public API, take a look at our product page.
