OWASP API security – 4: Lack of resources & rate limiting

 

Introduction

1: Broken object level authorisation

2: Broken user authentication

3: Excessive data exposure

4: Lack of resources & rate limiting

5: Broken function level authorisation

6: Mass assignment

7: Security misconfiguration

8: Injection

9: Improper assets management

10: Insufficient logging & monitoring


 

APIs can become overwhelmed when the resources they rely on are fully consumed. OWASP refers to this as lack of resources & rate limiting. In this situation, an API can no longer service new requests, and may even be unable to complete those already in progress.

APIs which don’t have adequate restrictions in place can be overwhelmed by legitimate requests as well as by those originating from malicious actors performing Denial of Service (DoS) attacks. Whatever the origin, a sudden surge in requests, or even a small number of errant requests, can cause problems for the API server.

OWASP summary

  • Threat agents/attack vectors (API specific; exploitability: 2): Exploitation requires simple API requests. No authentication is required. Multiple concurrent requests can be performed from a single local computer or by using cloud computing resources.
  • Security weakness (prevalence: 3; detectability: 3): It’s common to find APIs that do not implement rate limiting, or APIs where limits are not properly set.
  • Impacts (technical: 2; business specific): Exploitation may lead to DoS, making the API unresponsive or even unavailable.

Source: OWASP lack of resources & rate limiting

APIM context

API Gateways typically sit close to the edge of API infrastructure, which puts them in a good position to provide protection against requests that may have an adverse effect on an API’s availability:

  • Execution timeout: Requests which take longer than expected to complete are also likely to be consuming a high level of resources. The Gateway can terminate such requests, allowing the API server to free the resources they were consuming.
  • Payload size: Sending large payloads is a common approach to DoS attacks. It forces the server to allocate large amounts of resources to deal with the request, potentially leaving it unable to service any further requests. API gateways can protect against oversized requests by rejecting those which exceed a particular size, preventing them from reaching the API server. A combined timeout and payload size sketch follows this list.
  • Rate limiting: Rate limiting is a core traffic control feature of an API gateway. It can be used to control individual consumers as well as larger groups (a minimal rate limiting and quota sketch follows this list):
    1. Individual: Individual rate limiting is useful against a particular client which exceeds their allowance.
    2. Global: Global rate limiting places a limit on the total number of requests the API Gateway will allow through to that API over a particular period. This is an aggregated approach, designed to enforce a maximum capacity on the number of requests an API server will receive over a period. The problem with this approach is that it affects all consumers of the API, regardless of their individual level of consumption.
    3. Throttling: Throttling is a subset of rate limiting, whereby requests are automatically retried by the Gateway after a period of time. This can assist API clients as, if the retry is successful, they will not be aware that their request activated the rate limit, only that the request took longer.
    4. Quotas: Quotas restrict clients to a set number of requests over a given period. Each request sent by the client reduces their quota, and when this reaches zero any further requests will be blocked until the quota can be renewed. Quotas usually operate over much longer periods than rate limits, typically in the order of days, weeks or months.

 

  • Response caching: Caching responses can drastically reduce the load on the API server, especially for APIs whose data changes infrequently and is applicable to a wide range of consumers. When caching is active, any request for which a response has already been fetched by the API Gateway can be fulfilled from the Gateway cache, removing the need to call the API server again. A caching sketch follows this list.
  • Circuit breaker: A circuit breaker can be used to detect when the API server is failing, and prevent further requests from being sent to it. In the context of resource management and rate limiting, this is an action of last resort, since the server is already experiencing problems. If a server is returning HTTP 500 responses, the API Gateway can block further requests from reaching it for a period of time, giving the API server the chance to recover. A circuit breaker sketch follows this list.
  • IP restriction: Blocking IPs is a rudimentary form of access control. It’s implemented by creating a list of IP addresses which the Gateway checks against the client IP when it receives a request. The list can function as either an allow list or a deny list. In the case of allow, the client’s IP must exist in the list for it to be granted access. In the case of deny, it’s the opposite: the client’s IP must not exist in the list for it to be granted access. The two approaches are therefore mutually exclusive.
    Unlike rate limiting, IP restrictions are typically configured manually, making this approach unsuitable for scenarios which require a dynamic reaction. An IP restriction sketch follows this list.
  • Complexity limiting: Deeply nested queries are a threat specific to GraphQL APIs. They occur when a query repeatedly selects two or more object types that reference each other, resulting in excessive resource usage on the server. To prevent this, API Gateways can be configured to reject requests containing queries which exceed a maximum depth. A depth limiting sketch follows this list.
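To make the execution timeout and payload size controls concrete, here is a minimal sketch of how a gateway-style proxy might enforce both in Go. The 5-second timeout, 1 MiB limit, and the upstream address and port are illustrative values only, not defaults of any particular gateway.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical upstream API server address.
	upstream, err := url.Parse("http://localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Reject request bodies larger than 1 MiB before they reach the upstream.
	const maxBody = 1 << 20
	sizeLimited := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ContentLength > maxBody {
			http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
			return
		}
		// Guard against clients that omit Content-Length by capping the body reader.
		r.Body = http.MaxBytesReader(w, r.Body, maxBody)
		proxy.ServeHTTP(w, r)
	})

	// Terminate requests that take longer than 5 seconds, freeing the connection.
	handler := http.TimeoutHandler(sizeLimited, 5*time.Second, "upstream timed out")

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```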
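The individual rate limits and quotas described above can be illustrated with a small sketch: a per-key fixed-window limiter combined with an in-memory quota counter. The limits (10 requests per second, a 10,000-request quota) and the use of the Authorization header as the key are assumptions made for the example; a production gateway would typically hold these counters in a shared store such as Redis.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

// counters tracks a single client's rate limit window and long-lived quota.
type counters struct {
	mu          sync.Mutex
	windowStart time.Time
	windowCount int
	quotaUsed   int
}

// limiter holds per-key counters in memory for the purposes of this sketch.
type limiter struct {
	mu      sync.Mutex
	clients map[string]*counters
}

func (l *limiter) allow(key string) bool {
	l.mu.Lock()
	c, ok := l.clients[key]
	if !ok {
		c = &counters{}
		l.clients[key] = c
	}
	l.mu.Unlock()

	c.mu.Lock()
	defer c.mu.Unlock()

	// Quota: a hard cap over a long period (renewed out of band, e.g. monthly).
	if c.quotaUsed >= 10000 {
		return false
	}

	// Rate limit: at most 10 requests per one-second window.
	now := time.Now()
	if now.Sub(c.windowStart) >= time.Second {
		c.windowStart, c.windowCount = now, 0
	}
	if c.windowCount >= 10 {
		return false
	}

	c.windowCount++
	c.quotaUsed++
	return true
}

func main() {
	l := &limiter{clients: map[string]*counters{}}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical: key requests on the Authorization header (the API key).
		key := r.Header.Get("Authorization")
		if !l.allow(key) {
			http.Error(w, "rate limit or quota exceeded", http.StatusTooManyRequests)
			return
		}
		fmt.Fprintln(w, "ok")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```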
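Response caching can be sketched as middleware that records upstream GET responses and serves them from memory while they are still fresh. The path-only cache key, the 30-second lifetime, and the use of httptest.ResponseRecorder to capture the upstream response are simplifications for illustration; a real gateway would also honour cache-control headers and vary the key on query strings and auth context.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httptest"
	"sync"
	"time"
)

type cachedResponse struct {
	body    []byte
	status  int
	expires time.Time
}

type cache struct {
	mu      sync.Mutex
	entries map[string]cachedResponse
}

func (c *cache) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Only cache GET requests; everything else goes straight to the upstream.
		if r.Method != http.MethodGet {
			next.ServeHTTP(w, r)
			return
		}

		c.mu.Lock()
		entry, ok := c.entries[r.URL.Path]
		c.mu.Unlock()

		// Serve a fresh cached entry without touching the upstream.
		if ok && time.Now().Before(entry.expires) {
			w.WriteHeader(entry.status)
			w.Write(entry.body)
			return
		}

		// Capture the upstream response so it can be cached for later requests.
		rec := httptest.NewRecorder()
		next.ServeHTTP(rec, r)

		c.mu.Lock()
		c.entries[r.URL.Path] = cachedResponse{
			body:    rec.Body.Bytes(),
			status:  rec.Code,
			expires: time.Now().Add(30 * time.Second),
		}
		c.mu.Unlock()

		w.WriteHeader(rec.Code)
		w.Write(rec.Body.Bytes())
	})
}

func main() {
	// Stand-in for the real upstream API server.
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("expensive response"))
	})

	c := &cache{entries: map[string]cachedResponse{}}
	log.Fatal(http.ListenAndServe(":8080", c.middleware(upstream)))
}
```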
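A circuit breaker of the kind described above might look like the following sketch: after a run of HTTP 500-range responses from the upstream, the gateway fails fast for a cooling-off period. The threshold of five consecutive failures and the 30-second open period are illustrative values, as is the upstream address.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

type breaker struct {
	mu        sync.Mutex
	failures  int
	openUntil time.Time
}

func (b *breaker) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		b.mu.Lock()
		open := time.Now().Before(b.openUntil)
		b.mu.Unlock()
		if open {
			// Circuit is open: fail fast instead of hitting the struggling upstream.
			http.Error(w, "upstream unavailable", http.StatusServiceUnavailable)
			return
		}

		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)

		b.mu.Lock()
		if rec.status >= http.StatusInternalServerError {
			b.failures++
			if b.failures >= 5 {
				// Open the circuit for 30 seconds to let the upstream recover.
				b.openUntil = time.Now().Add(30 * time.Second)
				b.failures = 0
			}
		} else {
			b.failures = 0
		}
		b.mu.Unlock()
	})
}

// statusRecorder captures the status code written by the upstream proxy.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.status = code
	s.ResponseWriter.WriteHeader(code)
}

func main() {
	// Hypothetical upstream API server address.
	upstream, err := url.Parse("http://localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	log.Fatal(http.ListenAndServe(":8080", (&breaker{}).middleware(proxy)))
}
```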
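IP restriction is straightforward to sketch as middleware that checks the request's source address against a configured allow list; a deny list is the same check inverted. The addresses used here are examples from the documentation-only TEST-NET range.

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// allowed is a statically configured allow list of client IPs (example values).
var allowed = map[string]bool{
	"203.0.113.10": true,
	"203.0.113.11": true,
}

func ipRestrict(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		// For a deny list, invert the lookup: reject when the IP is present.
		if err != nil || !allowed[ip] {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", ipRestrict(api)))
}
```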
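Finally, complexity limiting for GraphQL can be sketched by estimating a query's nesting depth and rejecting anything beyond a configured maximum. Real gateways parse the query rather than counting braces as this example does, and the maximum depth of 3 is an arbitrary choice for illustration.

```go
package main

import "fmt"

// queryDepth returns the maximum selection-set nesting of a GraphQL query,
// approximated by tracking matched curly braces.
func queryDepth(query string) int {
	depth, max := 0, 0
	for _, r := range query {
		switch r {
		case '{':
			depth++
			if depth > max {
				max = depth
			}
		case '}':
			depth--
		}
	}
	return max
}

func main() {
	const maxDepth = 3 // illustrative limit

	// A deeply nested query of the kind produced by objects that reference each other.
	query := `{ user { friends { friends { friends { name } } } } }`

	if queryDepth(query) > maxDepth {
		fmt.Println("rejected: query exceeds maximum depth")
		return
	}
	fmt.Println("accepted")
}
```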

All of these measures are helpful in controlling resource usage, especially for individual consumers, and should be considered for APIs where performance and availability are priorities. However, they will likely be insufficient to mitigate a DoS attack, because the large volume of requests sent during such an attack will probably overwhelm the API Gateway itself, especially when the attack is distributed across many clients (DDoS). In this situation, even if the APIs are configured to use a global rate limiter, enforcing the limit will affect all consumers, drastically reducing the API’s availability.

A better approach for handling DoS, and certainly DDoS, is to use infrastructure and services built specifically for this purpose. Providers such as Cloudflare offer DDoS mitigation as a service, making it simple to implement. Their systems have the capacity to deal with the enormous volumes of traffic which characterise these attacks, and have the additional benefit of being hosted on separate infrastructure, which reduces the impact on the infrastructure hosting the API and API Gateway. This approach is recommended by Stack Overflow in their blog post on lessons learned from dealing with DDoS attacks.

Tyk approach

As an APIM product, Tyk’s API Gateway supports many of the approaches outlined above. The Gateway can be configured to use the following out-of-the-box functionality when handling API traffic:

  • Execution timeouts
  • Payload size limits
  • Rate limiting
  • Throttling
  • Quotas
  • Response caching
  • Circuit breakers
  • IP allow and deny lists
  • GraphQL query depth limiting

These features can vary in configuration across APIs and endpoints, enabling Tyk to support a wide range of scenarios. However, while the approaches outlined above work well for controlling legitimate API consumers, for DoS-type attacks it’s recommended to use third-party services which are built to handle such threats.