Platform engineering is taking off around the globe, shaping the API landscape in exciting and ever more efficient new ways as it does so. But this is not at the expense of tried and tested practices for successful API management. Instead, platform engineering teams are baking these successful practices into the internal developer platforms they create and evolve.
We touched on this idea briefly in our recent look at the role of CI/CD and platform engineering in robust software ecosystems. Today, it’s time to shine a spotlight on integrating another core element of successful API management into platform engineering: rate limiting techniques.
Why? So that your internal developer platform can deliver a reliable and consistent user experience that is both secure and scalable. Read on to discover how.
What is rate limiting?
Anyone who works with APIs should be familiar with the concept of rate limiting. It is a practical method of limiting network traffic by capping the number of API calls a client can make per second (or minute or hour). It can be applied as an API-level global rate limit, a key-level global rate limit or a key-level per-API rate limit.
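To make that concrete, here’s a minimal sketch of a key-level check in Python. Everything here is illustrative – the function name, the 100-requests-per-minute figure – and a real gateway would track these counters in a shared store rather than in process memory:

```python
import time
from collections import defaultdict

# Illustrative limit: each client key may make 100 calls per 60-second window.
LIMIT = 100
WINDOW_SECONDS = 60

# (client_key, window) -> call count. Toy sketch: old windows are never evicted.
_counters = defaultdict(int)

def allow_request(client_key: str) -> bool:
    """Return True if this client is still within its limit for the current window."""
    window = int(time.time() // WINDOW_SECONDS)
    _counters[(client_key, window)] += 1
    return _counters[(client_key, window)] <= LIMIT
```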
Rate limiting is a valuable tool for reducing server load and thus supporting reliable performance. It’s also handy for API providers with tiered pricing structures, where different tiers deliver different usage levels.
On the cybersecurity front, rate limiting is also essential in defending against hackers and malicious bots, with API-level global rate limits preventing APIs from being overwhelmed in the event of distributed denial of service (DDoS) attacks.
What does it have to do with platform engineering?
Platform engineering is all about efficiency – it’s one of the reasons that platform engineering (when done well) has the potential to deliver an enhanced developer experience. Rate limiting is also about efficiency in terms of resource allocation and management, so the concept dovetails nicely with what internal developer platforms are designed to deliver.
Rate limiting is also about scalability (through the graceful handling of traffic), security (defending against brute force and DDoS attacks) and service quality (by maintaining reliable performance). Platform engineering also serves these goals, enabling developers to create and innovate in a scalable, secure and high-quality way.
Integrating rate limiting into platform engineering
There are various approaches you can take when implementing API rate limiting, and integrating those techniques into your platform engineering practices lays a strong foundation for striking the right balance between user experience and system stability. This is particularly important as usage of your platform grows.
With scalability, efficiency, reliability and quality all firmly in mind, let’s look at some rate limiting integration techniques and tips for your internal developer platform practices.
First up, it’s vital to identify critical endpoints. Which APIs are essential to the functionality of your platform? And where are heavy traffic and/or security vulnerabilities likely to be a concern? These are the APIs on which to focus when it comes to rate limiting.
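As a simple illustration of what that might look like in configuration, the sketch below maps a few invented endpoints to stricter or looser limits. The paths and numbers are assumptions made up for the example:

```python
# Invented endpoints and limits, purely for illustration.
ENDPOINT_LIMITS = {
    "/auth/login": {"requests": 10, "per_seconds": 60},   # brute-force target: keep tight
    "/search":     {"requests": 100, "per_seconds": 60},  # heavy traffic: cap per client
}

DEFAULT_LIMIT = {"requests": 300, "per_seconds": 60}

def limit_for(path: str) -> dict:
    """Look up an endpoint's limit, falling back to a platform-wide default."""
    return ENDPOINT_LIMITS.get(path, DEFAULT_LIMIT)
```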
Rate limiting algorithms
It’s also important to choose the right algorithm. Rate limiting algorithms are based around different strategies for different use cases. Some of the most common include:
- Leaky bucket – first come, first served, with a queue of items for processing at a regular rate.
- Token bucket – a fixed number of tokens (representing network requests) that can be consumed over time, with tokens added at a constant rate (sketched in code after this list).
- Fixed window – permitting a set number of requests in a given period.
- Moving/sliding window – like a fixed window, but using a sliding timescale to smooth out the demand spikes that can occur each time a new fixed window opens.
- Sliding log – each client’s requests are recorded in a timestamped log, and usage is calculated from the number of logged requests that fall within the current window.
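To ground one of these strategies, here’s a short token bucket sketch in Python. The class and its parameters are illustrative assumptions; in production, the bucket state would typically live in your gateway or a shared store such as Redis rather than in process memory:

```python
import time

class TokenBucket:
    """Illustrative token bucket: up to 'capacity' tokens, refilled at 'rate' tokens/second."""

    def __init__(self, capacity: float, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity  # start full, so bursts are allowed immediately
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Consume 'cost' tokens if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Example: allow bursts of up to 20 requests, with a sustained 5 requests/second.
bucket = TokenBucket(capacity=20, rate=5)
if not bucket.allow():
    print("429 Too Many Requests")  # reject, queue or retry later
```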
The algorithm you choose should be driven by the particular needs of your platform and its users.
As with pretty much any system, monitoring and analytics are important when integrating rate limiting into platform engineering. They allow you to track the effectiveness of your rate limiting techniques and spot potential bottlenecks or other issues early, so that you can address them proactively.
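What might that tracking look like? The sketch below is one hedged illustration: it counts allowed versus throttled requests per endpoint and flags hotspots. The function names and the 5% threshold are assumptions, and in practice this data would feed your platform’s analytics pipeline:

```python
from collections import Counter

# Hypothetical counters you might feed into your metrics pipeline.
metrics = Counter()

def record_rate_limit_decision(endpoint: str, allowed: bool) -> None:
    """Track allowed vs. throttled requests per endpoint for later analysis."""
    outcome = "allowed" if allowed else "throttled"
    metrics[(endpoint, outcome)] += 1

def throttling_hotspots(threshold: float = 0.05) -> list:
    """Flag endpoints where more than 'threshold' of traffic is being throttled."""
    endpoints = {ep for ep, _ in metrics}
    hotspots = []
    for ep in endpoints:
        total = metrics[(ep, "allowed")] + metrics[(ep, "throttled")]
        if total and metrics[(ep, "throttled")] / total > threshold:
            hotspots.append(ep)
    return hotspots
```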
Any platform engineer worth their salt will also understand the importance of clear communication with platform users around rate limiting. This means covering rate limiting in your platform documentation and surfacing limits through error responses and API response headers.
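For instance, many providers surface limits via response headers on every call and return an HTTP 429 status when a client is throttled. The helper below follows the widely used X-RateLimit-* convention; exact header names vary between providers, so treat this as a sketch rather than a standard:

```python
def rate_limit_headers(limit: int, remaining: int, reset_seconds: int) -> dict:
    """Build commonly used (though not universal) rate limit response headers."""
    return {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(remaining, 0)),
        "Retry-After": str(reset_seconds),  # most useful on a 429 response
    }

# Example: a throttled client sees an HTTP 429 status plus these headers.
headers = rate_limit_headers(limit=100, remaining=0, reset_seconds=30)
```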
Rate limiting, platform engineering and Tyk
Centralising your approach to rate limiting means you can take a bird’s-eye view of everything from configuration to monitoring and analytics – hence the value of using an API management solution such as Tyk.
Taking a centralised approach to integrating rate limiting techniques into platform engineering practices isn’t just about easy implementation. It also means that you can swiftly and effortlessly adjust to changing traffic patterns as time passes and the use of your platform evolves. This makes it a core component in delivering an efficient, secure, scalable platform on which your developers can rely.
Want to know more about building a platform strategy your teams want in on? The Tyk team is always here to help, so drop us a line if you have questions or fancy a technical natter.