
Tyk API Gateway v2.3 released

Sweet Shyamalamadingdong! Tyk v2.3 is out!

If API Gateways were professional wrestlers, Tyk would probably be a Luchador… she would use different stage names: Pirata Rosa, Presidente Misterioso, or La Fenix Mayor, and each persona would instill quivering anticipation in onlookers: “No! Not Pirata Rosa!” they would whisper on their lunch break, “I heard she made El Ruso Picante eat his own mask, after she kicked his ass with a rubber chicken….”

“This one time, La Fenix Mayor strolled into the ring,” they would say in hushed tones, “she took one of the fans of El Coyote Magico, and then beat him to death with his own feet!” Tickets to her shows would sell in the millions, her fans would be adoring, her fame would spread across the world, and she would be so renowned that the Mayor of Funkytown would need to erect a statue in her honor… Tyk the API Gateway, the Luchador-of-Luchadores, the Beater-of-things, oh it would be magical…

Ok, so maybe I’m going totally off-topic here… we’re here to talk about the newest, shiniest, most kick-ass-iest API Gateway update in the history of API Gateway updates.

So what’s new pussycat? Let. Me. Tell. You:

Performance, performance, performance, performance… wait, did I mention performance? Performance.

That’s right: Tyk v2.3 can handle more than twice the traffic that v2.2 could. On top of that, it’s more efficient in how it handles Redis, and it’s more clever in how it manages its traffic.

The latest incarnation of Tyk comes with a distributed rate limiter. This new module means that Tyk no longer needs to synchronise rate limits across a cluster via Redis; instead it relies on ongoing inter-cluster chatter and a distributed, in-memory, token-based limiter that is eventually consistent rather than hard-synchronised.

The new rate limiter is also more forgiving: if you set a limit of 1 request per second, then one request per second will always get through, whereas previously the client would need to implement some kind of back-off strategy.
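
To picture what “token-based and eventually consistent” means in practice, here is a minimal, single-node token-bucket sketch in JavaScript. It is purely illustrative – it is not Tyk’s actual implementation – but it shows why a 1-request-per-second limit always lets one request per second through: tokens are replenished continuously rather than in hard, synchronised windows. In the distributed version, each Gateway keeps counters like these in memory and reconciles them with its peers over time.

// Illustrative token bucket: allows `rate` requests per `per` seconds.
// This is NOT Tyk's internal code, just a sketch of the general idea.
function TokenBucket(rate, per) {
    this.capacity = rate;      // maximum tokens the bucket can hold
    this.tokens = rate;        // start full
    this.per = per * 1000;     // window length in milliseconds
    this.lastRefill = Date.now();
}

TokenBucket.prototype.allow = function () {
    var now = Date.now();
    // Refill proportionally to the time that has passed since the last check.
    var elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + (elapsed / this.per) * this.capacity);
    this.lastRefill = now;

    if (this.tokens >= 1) {
        this.tokens -= 1;      // spend a token; the request is allowed
        return true;
    }
    return false;              // over the limit; reject (no Redis round-trip needed)
};

// A 1-request-per-second limit always lets one request per second through.
var bucket = new TokenBucket(1, 1);
console.log(bucket.allow()); // true
console.log(bucket.allow()); // false until the bucket refills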

All of this also means an overall reduction in the amount of Redis traffic coming from the Gateway, which ensures that the Gateways scale effectively as CPU cores increase.

And all of that culminates in a really cool, auto-scaling system that can handle a bunch of traffic, efficiently and effectively.

You want to extend Tyk? Have I got some news for you…

So if Tyk were a wrestler, and she were to use a weapon outside of the ring (as you do), she wouldn’t be like most other wrestlers, who are limited to a single tool – usually a poor spectator’s chair. Tyk – or La Fenix Mayor!, to use her nom-de-souffrance – will use the spectator’s chair, the spectator’s backpack, the spectator’s handkerchief, or even the poor spectator themselves to beat her opponent into a whimpering mess.

(That’s quite a strained metaphor, I’ll admit.)

What we’re saying is that in Tyk v2.3 you can add pre-processing, post-processing, and custom authentication middleware in more than just one host language. With Tyk, you can add plugins using… clears throat, drumroll: Python, or virtually any other language that can host a gRPC service.

That’s right, if you want to add a custom authentication server to Tyk and you want it to be fast, you can use any of the above to extend how Tyk works.

The new plugins feature means that on top of all the native goodies that come with Tyk out of the box, extending it now fits the needs of your team rather than whatever we (or anyone else) impose on you. It’s all about flexibility and performance, and here, we perform very well.

Performance

While Tyk v2.3 can easily handle 2,000 requests per second on commodity hardware, when you start introducing plugins and host languages you will see a bit of a performance hit – but even with that hit, Tyk is designed to stay plenty fast. In our benchmarking, Python 3 plugins performed at about 1,300 requests per second with sub-100ms latency, while gRPC with a local gRPC host gave us 1,400 requests per second with sub-100ms latency.

This is important, so I’m going to say it again: you can extend Tyk with almost any language you choose, and get blazing-fast performance to boot.

Pre-built plugins

Now we’re not saying “we have all this cool new stuff for you but you need to do it yourself”. Hell no! We always like to add value – just look at all the cool features that come built into Tyk natively. No, we like to go further, to go deeper. With this release, we’ve also been hard at work fashioning some initial plugins that are, one, pretty damned useful, and two, a great starting point for setting up your own.

Some of the more awesome plugins that we’ve put together are:

  • Loggly integration
  • Datadog integration
  • Bot detection
  • Message Queue integration (AMQP amongst other message queue hand-offs)
  • Webhook sidecar
  • Correlation IDs for request logging

We’ve also put together demos to get you started in your favourite languages.

And we’ll keep going…

Is this the future of Tyk?

In a word: No. Unlike other vendors, we like to pack value into our offering, and we don’t want to take the chicken-sh*t way out and force our customers to maintain their own plugins, or forks of ours in order to add functionality to Tyk. We don’t want you to increase your technical debt by having to maintain customisations to Tyk over huge swathes of time.

So our plan is to continue adding great functionality to the Tyk OSS core, and even migrating popular plugins into the core so that they are available to all “at the speed of Go”, while giving you the flexibility to integrate Tyk with your own systems easily.

In future, we may opt for creating plugins for Tyk in Go itself, at which point things may change, but for now: Tyk is there to make sure that you can get your sh*t done and get it done well.

Other cool stuff

So the above two features are pretty much the biggest, baddest announcements that come with Tyk wading into the ring with v2.3, but we have also been busy fixing bugs and adding smaller, nice-to-have features to the overall system. I’ve listed a few below because they are pretty cool, and it’s nicer to read about them here than to parse our changelog:

Environment variable configuration

This is a dull but very useful change – you can now configure all of the Tyk Gateway, Analytics and Pump settings via environment variables – more precisely, you can override them. This is extremely useful for those of you deploying into Docker orchestration environments such as Kubernetes, where config files aren’t fun to deploy.

Live Gateway logs in the Dashboard

A lot of the debug information in Tyk comes from the Gateway log (stderr) output, so we’ve made it so that key log operations are also copied through to a live Dashboard view – this way you can quickly check why an API hasn’t loaded or if a system event or error has occurred without reaching for the raw logs from the Gateway hosts.

Custom error templates per response code in XML or JSON

This was a popular request: you can now override all of our error templates, per status code, in either XML or JSON. This gives you much more control over how your API Gateway responds to end users when errors occur.

Chained authentication methods

A popular request from the community: the ability to use HMAC-based message signing in conjunction with a bearer token. With this mechanism, you can chain many different authentication methods together in order to provide maximum security for your applications.

Hot reconfigure and reload of the Tyk Gateway process from the Dashboard

Tyk has always been able to hot-reload from the command line, but we’ve made the process smoother and more efficient. Not only can you send a signal to the process to fork and start a new process without dropping any connections, but you can also hot-reload from the Dashboard and even re-configure the Gateway in the process, without having to set foot on the host.

(Naturally, this feature can be disabled!)

A new CLI tool to help you build and deploy plugins

Having so many plugin options means we needed a better way to publish your code to the Gateway layer. To make that work, we’ve started work on a CLI tool. Right now it does just one thing: sign and package up your plugins so that their integrity can be cryptographically verified by the Gateway before they are deployed.

But this project will keep growing, and more and more functionality will be added to the CLI to make it easy to script common Gateway operations.

More logger integrations

Tyk v2.3 now ships with additional logger integrations, so you can aggregate your Gateway logs into the system of your choice.

Separate Redis cache

For those using our caching mechanism in very high availability environments, you do not want your Redis cache to live in the same database that Tyk uses for configuration information. With this update it is possible to completely separate the cache out into a different Redis database or cluster.

More portability: Import/Export API

It is now possible to back up and re-create your Organisations, Policies and APIs using a dedicated import/export API. This allows you to completely re-generate an installation from backed-up assets without worrying about mis-attributed IDs.

Let’s Encrypt support

A fun and currently experimental feature: the Tyk Gateway can now auto-provision SSL certificates for your domain-bound APIs, so you do not need to configure them yourself. All certificate information is encrypted and shared across the cluster so that subsequent visits to your Gateways are fast and scalable.

That’s all folks

Until we meet again – we’re already planning v2.4 and have some very cool stuff in the pipeline for you. As always, get in touch with us on the community forum, or directly via Twitter, to give us feedback or ask any questions.

For those of you on v2.2, we have created some upgrade notes.

Now… back to the ring.

Martin and the team at Tyk Towers in Shoreditch, London.

Download!

Why Do Microservices Need an API Gateway?

Sometimes everything depends on a powerful gateway. Covering security, control and the power of transforms, James Higginbotham explores the ways microservice architectures can benefit from an API Gateway.


With the growth of API as a product, as well as API-centric IT initiatives, API gateways and management layers are becoming more commonplace. But should we consider an API gateway for our microservices as well? If so, what kind of benefits do they offer?

What is an API Gateway?

An API gateway provides a single, unified API entry point across one or more internal APIs. They typically layer rate limiting and security as well. An API management layer, such as Tyk.io, adds additional capabilities such as analytics, monetisation, and lifecycle management.

A microservice-based architecture may have from 10 to 100 or more services. An API gateway can help provide a unified entry point for external consumers, independent of the number and composition of internal microservices.

The Benefits of an API Gateway For Microservices

Prevents exposing internal concerns to external clients. An API gateway separates external public APIs from internal microservice APIs, allowing microservices to be added and boundaries changed. The result is the ability to refactor and right-size microservices over time, without negatively impacting externally-bound clients. It also hides service discovery and versioning details from the client by providing a single point of entry for all of your microservices.

Adds an additional layer of security to your microservices. API gateways help to prevent malicious attacks by providing an additional layer of protection from attack vectors such as SQL Injection, XML Parser exploits, and denial-of-service (DoS) attacks.

Enables support for mixing communication protocols. While external-facing APIs commonly offer an HTTP or REST-based API, internal microservices may benefit from using different communication protocols. Protocols may include ProtoBuf, AMQP, or perhaps system integration with SOAP, JSON-RPC, or XML-RPC. An API gateway can provide an external, unified REST-based API across these various protocols, allowing teams to choose what best fits the internal architecture.

Decreased microservice complexity. Microservices have common concerns, such as: authorization using API tokens, access control enforcement, and rate limiting. Each of these concerns can add more time to the development of microservices by requiring that each service implement them. An API gateway will remove these concerns from your code, allowing your microservices to focus on the task at hand.

Microservice Mocking and Virtualization. By separating microservice APIs from the external API, you can mock or virtualize your services to validate design requirements or assist in integration testing.

The Drawbacks of a Microservice API Gateway

While there are many benefits to using an API microservice gateway, there are some downsides:

  • Your deployment architecture will require more orchestration and management with the addition of an API gateway
  • Configuration of the routing logic must be managed during deployment, to ensure proper routing from the external API to the proper microservice
  • Unless properly architected for high availability and scale, an API gateway can become a limiting factor and even a single point of failure

Using Tyk For Your Microservice Gateway

Rather than providing an explanation of Tyk’s features as a microservice API gateway, I’ll let Dave Koston, VP Engineering at Help.com explain how they use Tyk:

“We use Tyk as a gateway in front of around 15 services (of varying sizes). We’re also using Tyk Identity Broker to proxy logins to our existing authentication service. Tyk gives us some really great features out of the box like rate limiting, sessions, token policies, and visibility into api traffic.”

In addition, Dave mentioned that Tyk helps them secure their web socket connections, in addition to their API:

“We also have web socket communication that requires authentication and it was easy to simply add some metadata to Tyk sessions and use the Tyk session store (Redis in our case) to authenticate those web socket connections with the same access token that we use for HTTP.”

To learn more about Tyk and how it can provide an API gateway for your microservices, along with API management of your public API, take a look at our product page.

7 Critical Factors For Selecting Your API Management Layer

Remember the final scene of Indiana Jones and the Last Crusade, where the old Knight says: “You must choose, but choose wisely”? Admittedly, Indy wasn’t choosing an API Management Solution and certainly had other things on his plate that day. However, the adage still applies, and to help you choose wisely we’ve got James Higginbotham, who has put together a list of the seven critical factors to consider when choosing your API Management Layer.


I’m often asked which API management layer is the best one available today. The answer is always, “It depends”. Whether you are considering an open source or closed source API management layer, the number of vendors and options available today are astounding. Many API management solutions focus on delivering specific capabilities, while others strive to cover a breadth of features but don’t go very deep in all areas. This article will shed some light on how to approach the decision making process for managing your API, so that you can ensure the needs of your business, product, and development teams are met.

Why Do You Need API Management?

For those unfamiliar, API management layers accelerate the deployment, monitoring, security, versioning, and sharing of APIs. They are often deployed as a reverse proxy, intercepting all incoming API request traffic and applying rules to determine if requests should be routed to the API. In addition to traffic management, they commonly offer:

  • Token-based authorization support through API-key based authentication and/or OAuth 2
  • Deployment and versioning support for redirecting incoming requests to the current or newly deployed release of an API
  • Rate limiting to reduce the impact of greedy API clients and denial of service (DoS) attacks
  • Developer portals for hosted documentation and self-onboarding by developers
  • Administrative portals for viewing usage reports
  • Billing and payment support for selling subscription-based access to your API
  • On-premise, cloud, and hybrid hosting deployment options

API management layers may be offered as purely closed source, purely open source, or in a hybrid model using a combination of open source components and closed source offerings.

Factor #1: Self-hosted and SaaS deployment options

Your deployment requirements are a huge factor in API management layer selection. While most vendors offer managed cloud-based options, some choose to do so only during the early stages of your API, requiring you to move to an on-premise solution as your traffic increases. Knowing how you need to deploy your API management layer, including the resources available to monitor and maintain it, is important to the selection process. Look for a vendor that offers the kind of deployment you require: on-premise or managed cloud services. If you are unsure, select a vendor that offers a seamless transition from one to the other, such as Tyk.io.

Factor #2: Simple installation process

If your API management layer will reside within your own cloud environment or data center rather than hosted, then installation needs to be simple. Evaluate the installation process to ensure that standing up new instances and new environments (e.g. staging, UAT, integration) will be easy – and preferably automated. If you prefer containerization, consider vendors that offer a container-based distribution to reduce the effort required to support your deployment process.

Factor #3: Meets feature requirements

Part of your selection process should include an evaluation. We covered this in a previous article, but I’ll repeat it here for reference. Your evaluation should include the following considerations:

  • Authorization – can you implement your desired authorization mechanism (e.g. API tokens, keys, OAuth 2, etc) to meet your needs?
  • Performance – how much overhead does the layer require for each request? Measure the performance of your API endpoints before and after installing the API management layer. Expect some reduction in performance, but also ensure that the management layer doesn’t cause a drastic decrease in performance that may require additional server capacity
  • Security – perform some basic penetration testing to verify that the layer is catching common attack vectors. Attacks such as SQL injection, denial of service attack prevention through rate limiting, and other attacks can often be simulated with some simple scripts
  • Onboarding – how easy or hard will it be for your developers to get onboarded? Does the onboarding process support the business, product, and technical needs of your company?
  • Reporting – does the management layer provide the information you will need on a day-to-day basis to better serve your developers? Can you export data via an API or push it into an external reporting solution easily, for integration into other daily/weekly reports?

Factor #4: Customization should not be required

I was recently discussing the abundance of infrastructure tools available to development teams today. With every tool comes the burden of understanding it and getting it integrated into your environment. Some tools choose to offer a variety of options, but require considerable effort to get them running. Be sure to evaluate the effort required to start using the API management layer. Customization options are great, but if you can’t get started easily or without installing lots of plugins, you need to know this ahead of time.

Factor #5: Easy upgrades

Whatever solution you select, you will need to keep it upgraded to ensure you have the latest improvements and available features. Evaluate the upgrade process by reading past release notes to better understand the process that will likely be required. If there are no release or upgrade notes, then that should generate a concern. Just keep in mind that some commercial offerings only supply these details directly to customers or via a customer portal. If you don’t find anything, contact the vendor to ensure that they are available to paying customers.

Factor #6: Vendor viability

We all want API management vendors to experience growth and success. However, not everyone will be around in the long term. Consider the vendor’s viability by understanding their revenue model. For open source solutions, take into consideration the companies backing the solution, along with the community that is supporting it. If there isn’t much activity, then the solution may become abandoned in the future.

Factor #7: Management Automation

Finally, consider the automation options available to configure, manage, and integrate the solution into your operations processes. Vendors that offer APIs for every configuration feature, along with reporting APIs and webhooks for important events, make it easy to automate changes and integrate the solution into your deployment process.

Conclusion

As you have likely realized, it isn’t easy to select an API management layer. However, your decision will have ramifications for months or years to come. It may offer tremendous flexibility or severely limit your options in the future. Take the time to properly evaluate the API management layer that best fits your needs.

Tyk 2.2 is Here!

That’s right, the long-awaited version 2.2 release is finally here. We’ve been cleaning up, adding features and, most importantly, making Tyk easier to use.

The Gateway

Now sports some pretty cool new features:

  • OAuth now has support for the client credentials flow
  • It is now possible to use generative security policies with OAuth clients, that means no more needing to send any session data back to Tyk!
  • XML support for inbound and outbound body transforms
  • Context Variables: Get quick and easy access to request data across a variety of middleware, data such as requester IP, form data, headers, path parts and other meta data, all available to the transform middleware
  • Partitioned Policies: Now you can have policies only affect one part of a token instead of all elements, for those who have very complex sales strategies or quota allocation strategies, this makes it much easier to manage. So instead of having to grant ACL, Quota and Rate Limit rules in one go, you can just grant ACL rules, or Quota rules, or both.
  • Normalised URLs in analytics: With this latest update you can normalise the URLs in your analytics to remove those pesky UUIDs and Numeric IDs for more meaningful data

Websockets

We’ve left this for last because we’re pretty happy about it: Tyk now transparently supports WebSocket proxying as part of your APIs, and these connections can be protected by the same mechanisms that currently protect your existing APIs, be that bearer tokens, our circuit breaker, or our load balancer. The WebSocket proxy is transparent and will “just work”.

The Dashboard

The dashboard now has a UI for all of the above features, but the biggest change you will find is our new i18n support. That’s right, Tyk Dashboard now has language-pack capability.

To get things started, we’ve translated the UI into Simplified Chinese and Korean, and have made the language packs available as an open source repository here.

That’s right, it means that it is easy and dynamic to add a new language to Tyk (or to change the wording of the UI if it doesn’t suit you). All of this is configurable and easy to deploy, so have at it.

This update has been a small one for us, because we’re trying to make smaller, more effective releases that help our community and users instead of breaking things. We’ve got some pretty major features in the pipeline coming up, but our real focus will be on stabilising the platform.

As always, Tyk is available on our apt, yum and docker repositories!

Enjoy!

Martin & The Tyk Team.

Tyk v2.1 is out – Now with OpenID Connect, bug fixes and more!

Recently we announced that we had added full support for OpenID Connect to our Cloud platform, and that it would be coming to our next release in due course.

Well, the wait is over – and as of today it is available to everyone! That’s right, Tyk v2.1 and Tyk Dashboard v1.1 are now available.

This release’s main feature is the OIDC support, however we have also made many improvements and bug fixes, all of which can be seen in the Change Log.

To get started with 2.1 you can just upgrade your existing installations. All the deployment methods are supported, and v2.1 will be the default installation for all major distribution methods – but before you go off and do that, please back up your configuration files!

This is our first attempt at making smaller, more regular releases to ensure that upgrades are easier and involve less risk for you. We dog-food every feature and change in our Cloud platform before we cut a version for on-premise installation, so you can be sure that we’ve put all our builds through their paces.

Cheers,

Martin & The Tyk Team

Integrating Tyk Open Source API Gateway with a Custom Identity Provider using JSON Web Tokens

That’s quite a mouthful. But hey, you know what a lot of users want to do? Use their own Identity Provider, and the new hotness is JSON Web Tokens. For those who don’t know what they are, they’re pretty friggin’ cool – find out all about ’em here. We’re going to use them to do some cool trickery/magicky API Gateway token generation without ever having to generate a token.

Magic!

But seriously, it’s pretty cool – in short: you can have a custom Identity Provider (IDP) generate JSON Web Tokens for you, and then use those tokens directly with Tyk, instantly. Better yet, the underlying identity of the token (its owner) is tracked across JWTs – that means they can log in elsewhere, or via a different app or portal, and still get the same rate limits applied (or different ones; it’s all up to your IDP, not us!).

So how does Tyk magically do this? Well, the process is pretty involved, so we thought we’d talk you through it:

Centralised secrets and virtual tokens

With centralised secrets, we do not create any tokens at all within Tyk; instead, all tokens are generated externally. Centralised secrets involve storing a shared secret (HMAC or RSA) at the API Definition level. This secret is then used to validate all inbound requests, and access rules are applied based on specific fields that can be added to the claims section of the JWT, which control the underlying identity’s access to managed APIs.

To use this option, we do not generate a token at all. Instead we go to the API Designer and, under the JWT shared secret section (when selecting the JWT security handler), we add the shared secret (a public key is recommended).

First, let’s set things up:

  1. In the API Designer, we select “JSON Web Token” as the authentication mode
  2. Select RSA as the signing method
  3. Add a secret (public key)
  4. Set the identity source to be “sub” – this tells Tyk which claim in the JWT to use as the base “identity” of the JWT, i.e. the bearer (this might be a username, an email address, or even a user ID). A common JWT claim is “email”; we could use that too
  5. Set the policy field name to be “policy” – this tells Tyk which claim in the JWT to use to identify the policy to apply to this identity, i.e. the access control, quota and rate limiting settings to apply to it
  6. Save the API – it will now go live on your Gateway (a file-based sketch of these settings follows below)
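
If you prefer file-based API definitions to the Dashboard, the same settings map onto a handful of fields in the definition JSON. The fragment below is a hedged sketch from memory rather than a canonical reference – field names can differ between Tyk versions, so check your own API definition before relying on them:

// Hypothetical fragment of a file-based API definition matching the steps above.
// Field names are recalled from Tyk's docs and may vary by version.
{
    "use_keyless": false,
    "enable_jwt": true,
    "jwt_signing_method": "rsa",
    "jwt_source": "<base64-encoded RSA public key>",
    "jwt_identity_base_field": "sub",
    "jwt_policy_field_name": "policy"
}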

Now let’s create an actual policy:

  1. In our Policies section, we create a new policy called “JWT Test”
  2. Set the quota and rate limit rules and, most importantly, grant access to the API we just created

Let’s say when we save this policy, the ID returned is 1234567

So, let’s walk through a user flow:

  1. A user goes to a third-party login portal and authenticates themselves
  2. This third-party system generates a JWT using its private key (see the sketch after this list), and in this JWT adds the following claims:
    • policy: 1234567
    • sub: the user’s email address
  3. The user is then granted this JWT (maybe as a fragment in the URL) and uses it to access the API we created earlier
  4. Access is magically granted!
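
To make step 2 concrete, here is a minimal sketch of what the third-party IDP might do. It assumes a Node.js service using the jsonwebtoken npm package – neither of which Tyk requires; any JWT library in any language will do, as long as it signs with the private key that pairs with the public key stored against the API:

// Hypothetical IDP-side code issuing a Tyk-compatible JWT.
// Assumes Node.js and the `jsonwebtoken` package; any JWT library would work.
var jwt = require("jsonwebtoken");
var fs = require("fs");

// The RSA private key that pairs with the public key stored in the API Definition.
var privateKey = fs.readFileSync("idp-private-key.pem");

function issueToken(userEmail) {
    var claims = {
        sub: userEmail,    // the identity source Tyk is configured to read
        policy: "1234567"  // the ID of the "JWT Test" policy created above
    };

    // Sign with RS256 so Tyk can verify the token using the stored public key.
    return jwt.sign(claims, privateKey, { algorithm: "RS256", expiresIn: "1h" });
}

console.log(issueToken("user@example.com"));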

Tyk will now validate this request as follows:

  1. It extracts the JWT from the request and validates it against the shared secret (the public key) that we stored at the API level in the setup steps above
  2. If the token is valid, it looks for the identity source field; since this is configured as “sub”, it finds the user’s email address
  3. It hashes the email address and creates a new value of {org-id}{hash(sub)} – basically, it generates an internal token that is bound to the identity (the same value is derived each time this sub shows up)
  4. Tyk extracts the policy ID from the claims and retrieves this from memory (if it exists)
  5. Tyk now tries to find this internal token in its key store – if it exists, it applies the policy to the key (this does not override existing values; it just sets the maximums so that changes take immediate effect). Access control rules are overridden too, so they can change depending on the access source or the IDP doing the logging in
  6. If the internal token does not exist, Tyk creates the token based on the policy data – same as earlier but with a new identity-based token
  7. When the token is created, the internal token ID is added to the metadata of the session as a new field: TykJWTSessionID. (This is important, because you can reference this metadata object in the transform middleware – for example, you could inject it as a custom header into the outbound request, so your upstream application has access to the original JWT and the internal Tyk session token in case it needs to invalidate or track a specific user’s behaviour – aren’t we gosh darn helpful.) Since these session IDs do not change, they persist across JWTs for this identity
  8. This internal token is now used to represent the request throughout the Tyk request chain – so the access control, rate limiting and quota middleware act on it as if it were a real key – but the actual JWT is left intact in the request so it can be processed upstream
  9. If the access rules set by the policy referenced in the JWT’s policy field are all valid, the request is allowed to pass through upstream

Phew! That’s a lot of steps, but it means that you can handle access control in your IDP with a single added claim.

Anyway, we thought it was interesting – we hope you did too!

Simpler usage tracking with Token Aliases in Tyk Cloud!

So you might have noticed that this week we needed to do a little tinkering with our servers – thanks for your patience while we sorted all that out. We’re growing so quickly that we’re constantly pushing new features. One that went out today that we’re particularly happy with is Token Aliases.

What the heck is a Token Alias?

As you might know, Tyk hashes all keys when they are created so that they are obfuscated should your DB be breached. This creates a unique problem: how do you identify the tokens in your logs? That’s what aliases aim to solve.

An alias can be set on any token, and when the token accesses your APIs, the alias is stored alongside the token ID in your analytics and displayed in your Dashboard.

What’s more, when a developer generates a token via your API Developer portal, Tyk will auto-assign their email address as the alias to their token so that you can track their activity more easily.

(P.S. If you’re an on-prem user, this feature will be available in the nightlies tomorrow!)

Really simple, and really useful – enjoy Tykers!

Tyk Nightlies for Nighthawks

The Tyk Open Source API Gateway platform consists of quite a few modules: The Tyk Open Source API Gateway, the Tyk API Management Dashboard, Tyk API Analytics Pump, Tyk Identity Broker and more.

For those more adventurous types out there that have grabbed our open source API Gateway repository and compiled Tyk yourselves, you’ve been some of the few that have had access to the latest and greatest features. However, even you brave souls still miss out on getting all those awesome features delivered to your API Dashboard, waiting on us schlubs to release a new version so you can get all the juicy API Management goodness.

Well, the wait is over! We’re really quite happy to introduce the Tyk Nightly builds, which – basically – are what they say on the tin. We now build the development branch of Tyk, Tyk Dashboard and Tyk Pump every night, and deposit them in our file repository in AWS S3 for you to download and enjoy.

Now it goes without saying that these builds are not guaranteed and that using them puts you in danger of being eaten by a swarm of flying zombie sparrow-hawks, but for those truly brave, and truly wishing for the latest and greatest in open source API Gateway goodness, this is for you.

The files are all here: /docs/tyk-api-gateway-v-2-0/installation-options-setup/nightly-builds/ – go nuts.

Tyk’s Big Sister Has Landed: Tyk 2.0

It’s spring, spring is the season of new life, new beginnings, birth.

Well, we’ve been busy little midwives, and Tyk 2.0, Tyk’s big-ass big-sister, is here. She arrived sowing a trail of highly-organised, efficient and carefully designed API gospel, swarmed by the cluster of little Tyk* in her wake, parked in front of the brand-spanking new Tyk Towers in London and promptly sat on the dog.

The dog is now smaller, but happy.

That’s right, we’re back with a spring clean of the site and a brand new version of Tyk.

Some new names

Names are important, and we want to remove confusion from Tyk, so we’ve made some changes to reflect the different elements of the Tyk ecosystem.

Tyk API Gateway is now Tyk Community Edition: our community has been growing rapidly since we opened our new forum, and we’ve been getting increasing interest, pull requests and contributions from a wide segment of the Tyk developer community – a big fat thank you for that. It’s the Tyk community that makes Tyk what it is, and makes it so damned awesome. So we named Tyk after you, since you’re the ones who drive it.

Now, if you’re a Tyk Cloud user, then you’ll already have seen some of the 2.0 features crop up in your UI, since we’ve been running v2.0 on Cloud for some months now.

New features

So, brass tacks: what’s new? “What the heck have you made us wait for all these months?”, we hear you say… well, the highlights for Tyk Community Edition:

  • Major performance improvements
  • GeoIP Analytics
  • Detailed logging: log request and response data for easy tracing; this can be enabled dynamically using an ORG key for easy debugging
  • Full JWT support: the best way to integrate with other service providers
  • Completely rewritten HMAC support, now fully compatible with the latest spec
  • Multi-target support for versioned APIs
  • Improved stability & failure handling
  • Improved logging for better traceability
  • New, pluggable data sinks with Tyk Pump:
    • MongoDB
    • CSV
    • Elasticsearch
  • Third party IDM support for LDAP, Social, Basic Auth & Proxy
  • Mesosphere support for service discovery

Of course there’s loads more over in the changelog.

Tyk Professional Edition

And what about the Dashboard? Well, the Dashboard is now Tyk Professional Edition. There is still a free edition, but there is now also a paid-for license option for larger deployments. With Tyk 2.0, we bring Dashboard 1.0, which has some important and amazing new features:

  • New analytics views on a per API, per country, per tag basis with drill-down
  • New Geographic traffic charting – see where your developers and users are
  • New detailed log traces for debugging APIs
  • Realtime notifications of key team and cluster actions
  • IDM Integration management from the dashboard
  • Node status overview: what’s up, what’s down
  • Aggregate developer usage data in profiles
  • Granular user permissions & API client permissions for integrations
  • Faster DB and analytics access throughout
  • Improved analytics speed and caching
  • Improved Open API definition support in the portal
  • Better, more usable UI

Wait a sec, Tyk Dashboard can cost money?

Tyk Pro is still free for single node deployments (managing a single Tyk API Gateway from the Dashboard), but for anything larger, there is a license fee attached.

Tyk Pro not only gives you the dashboard, but each paid license also includes our Premium Support package, so a license purchase also gets you one-to-one engineering support right from the get-go.

For existing users, we are not removing old versions of Tyk Dashboard; those will still be available for download and installation – and the documentation is still available on this site – but they will not work with the latest version of the Gateway.

Tyk for Enterprise

We weren’t kidding when we said we’ve been busy, and that Tyk 2.0 is a big-momma-bear API Management platform. For Enterprise users, we’ve been busy doing PoCs, signing SLAs and consulting with big giganto-corps, and we’ve been busy taking notes, studying and learning. In response, Tyk for Enterprise is a new beast.

Tyk for Enterprise not only offers our incredible SLAs, but also comes with our new multi-data-center bridge component, enabling large enterprises to run isolated, zoned Tyk clusters that are managed centrally, with a clear, easy way to move APIs between environments, clusters, zones, and geographic locations. We call it the Tyk Multi Data Center Bridge; it powers Tyk Cloud Hybrid, and it’s awesome.

Someone won the friggin’ Apple Watch

Some of you might remember our survey where we asked what you wanted. We’ve been listening: we got some amazing feedback, and we’ve used it to drive our offering for large and smaller clients alike, and much of it has gone into this latest, biggest, baddest release.

The winner of the prize-draw for the watch was Diego Guario of Milan, Italy. We’ve never met Diego, but we know that he uses Tyk, which means he must be a smart, suave and sophisticated type of guy. Congratulations Diego!

So, to round things off – go forth, try Tyk Community Edition, get yourselves a Free Tyk Pro license, get involved, get busy, go get your Tyk on!

— Yours truly,

Martin & the team at Tyk Towers

Image Credit: F4LL3N-0N3

Creating an IP-based rate-limiter with Tyk and JavaScript middleware

We’ve recently had a user ask if it was possible to have Tyk act as a rate limiter based on IP address instead of by token. This isn’t possible out of the box, but it is with our Middleware API and some simple JS. So, how do we achieve this feat?

It’s actually really straightforward. First off, we’ll need to set up our API in a very specific way; we’ll use a file-based definition here, as we’re assuming this is a simple deployment of Tyk. The way we will do this is by creating a custom JS middleware function that runs before any other Tyk processing takes place, including all rate limiting. The middleware will:

  1. Catch the request
  2. Extract the IP address from the header
  3. Create a key based on the IP address with a specific rate limit
  4. Inject the IP address as the authorization header for Tyk to use

This means that rate limits are on a per-IP basis, which is great, and also means that Tyk can just work through the request normally once the middleware processing is done.

We will start off by defining our API appropriately: we basically want an API that will invoke the middleware as a pre-processor, and that also ensures any keys that are created do not last forever. The Tyk API Definition will look something like this:

{
    "name": "Tyk Test API",
    "slug": "test-name",
    "api_id": "ip-ratelimit-api",
    "org_id": "1",
    "use_keyless": false,
    "use_oauth2": false,
    "auth": {
        "auth_header_name": "x-tyk-authorization"
    },
    "definition": {
        "location": "header",
        "key": "version"
    },
    "version_data": {
        "not_versioned": true,
        "versions": {
            "Default": {
                "name": "Default",
                "expires": "3011-02-02 00:00",
                "use_extended_paths": true,
                "extended_paths": {}
            }
        }
    },
    "proxy": {
        "listen_path": "/b605a6f03cc14f8b74665452c263bf19/",
        "target_url": "http://httpbin.org",
        "strip_listen_path": true
    },
    "custom_middleware": {
        "pre": [
            {
                "name": "ipRateLimiter",
                "path": "middleware/ipRateLimiter.js",
                "require_session": false
            }
        ]
    },
    "session_lifetime": 172800,
    "active": true
}

The key things of note here are the `custom_middleware` section, where we’ve defined where to find our file and what the middleware object is called, and the `session_lifetime` value, which sets a default key lifetime for every key created on this API – in this case we’ve set it to 48 hours. Now that we are ready with the actual API, we can set up the middleware. It’s actually quite short – this is the entire IP rate limiter:

// ---- A Middleware-based IP rate limiter -----
var ipRateLimiter = new TykJS.TykMiddleware.NewMiddleware({});

ipRateLimiter.NewProcessRequest(function(request) {
    // Get the IP address
    var thisIP = request.Headers["X-Real-Ip"][0];

    // Set auth header
    request.SetHeaders["x-tyk-authorization"] = thisIP;

    var keyDetails = {
        "allowance": 100,
        "rate": 100,
        "per": 1,
        "expires": 0,
        "quota_max": 100,
        "quota_renews": 1406121006,
        "quota_remaining": 100,
        "quota_renewal_rate": 60,
        "access_rights": {
            "ip-ratelimit-api": {
                "api_name": "Test API",
                "api_id": "ip-ratelimit-api",
                "versions": [
                    "Default"
                ]
            }
        },
        "org_id": "53ac07777cbb8c2d53000002"
    }

    TykSetKeyData(thisIP, JSON.stringify(keyDetails), 1)

    return ipRateLimiter.ReturnData(request);
});

// Ensure init with a post-declaration log message
log("IP rate limiter JS initialised");

The above code should go into a file called `ipRateLimiter.js` in the `middleware/` folder of your Tyk installation. If you start Tyk now, the IP rate limiter will be in place.

What is actually going on here? It’s exactly as we said above: we extract the IP address. Here we are assuming that `X-Real-Ip` is a real header; since we are using NGINX to manage our domain, we have set a rule in the server block to make sure that the IP address is included as a header on each request:

location / {
    rewrite /(.*) /b605a6f03cc14f8b74665452c263bf19/$1 break;

    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://tyk;
} 

The thing of note here is `proxy_set_header X-Real-IP $remote_addr;` – this is important, as otherwise you won’t have access to the IP address from the header. Once we have the IP address, we set it as the authorisation header for the request.

In the next section we have a template for a session object. Here we are setting the rate limits manually; we could also use a policy file to do this and just attach the policy ID.

In this case we have set the rate limit to 100 requests per second, with an overall quota of 100 per minute. We then use the built-in method `TykSetKeyData` to create the key; notice that the last input to this function is `1` – this forces Tyk not to reset the quota for this transaction, which is otherwise the default behaviour. We don’t want Tyk resetting the quotas and rate limits every time the IP shows up; we just want to ensure the key is present.

When the middleware completes, Tyk processes the now-modified request as if it had an authorisation token (in this case the IP address), and the rate limits are enforced. And that’s it – this key will last for 48 hours, at which point it will be re-created. This means that your Redis instance won’t get flooded with an ever-growing set of IP addresses as traffic grows. Enjoy!
