By Ahmet Soormally with Carol Cheung.
API gateways are not just for Kubernetes (far from it!), but they work beautifully when it comes to securing network traffic and centralising its management. In a Kubernetes context, there are various API gateway architectures or deployment topologies you can use, and different vendors have different offerings: gateways can be delivered as hardware appliances, deployed to virtual machines, or even consumed as pure SaaS, so you don’t need to manage any infrastructure.
With that in mind, let’s look at some deployment options.
Shared gateway
In the shared gateway model, a single centralised gateway (or cluster of gateways) is deployed to a common namespace such as “ingress”. The gateways may be deployed as a DaemonSet (one per node) or as a Deployment, which can scale as required.
The API gateway is then used to serve many services, potentially across many namespaces. It’s up to you how to group your namespaces; grouping by application domain or team is common.
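As a minimal sketch of what this looks like on the cluster, the Deployment below runs a gateway in a dedicated “ingress” namespace. The container image is a placeholder for whichever gateway you actually run, and a DaemonSet could be substituted if you want one instance per node.

```yaml
# A shared gateway deployed once, in its own namespace, serving many services.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: ingress
spec:
  replicas: 3                          # scale the shared cluster as traffic grows
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: gateway
        image: example/gateway:latest  # placeholder image; use your vendor's gateway
        ports:
        - containerPort: 8080
```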
This is the most common deployment topology when the API gateway is used for API productisation or composition. It provides a single point of entry, hides the implementation of underlying microservices and enables you to create rate limits, quotas and even expose tailored API products for different audiences.
You could also apply the backend for frontend (BFF) pattern by exposing different subsets of individual microservices. This can better support different API consumers, such as web browsers, IoT devices or mobile devices.
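One way to sketch the BFF idea with plain Kubernetes Ingress resources is shown below. It assumes the shared gateway registers an IngressClass called shared-gateway, and the hostnames and service names (catalogue-full, catalogue-lite) are purely illustrative.

```yaml
# Two "backend for frontend" entry points on the same shared gateway:
# web clients and mobile clients each get a tailored subset of services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bff-routes
  namespace: shop                       # hypothetical application namespace
spec:
  ingressClassName: shared-gateway      # assumes the shared gateway owns this class
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /catalogue
        pathType: Prefix
        backend:
          service:
            name: catalogue-full        # richer API for browsers
            port:
              number: 80
  - host: mobile.example.com
    http:
      paths:
      - path: /catalogue
        pathType: Prefix
        backend:
          service:
            name: catalogue-lite        # slimmer API for mobile clients
            port:
              number: 80
```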
Gateway per service
One of the challenges with the shared gateway approach is that the ingress gateway is often difficult to tune for specific workloads. Inbound traffic to one service might overwhelm the gateway and negatively impact consumers of another service.
Another challenge is that different workloads might require differing levels of security and governance. Payment Card Industry (PCI) or Health Insurance Portability and Accountability Act (HIPAA) traffic might need to be isolated from regular publicly available web traffic, for example.
To solve these problems, it makes sense in some cases to deploy clusters of gateways dedicated to specific services. A further benefit of this approach is that the gateways can independently scale and fail.
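As an illustration, the sketch below dedicates a gateway Deployment to a hypothetical PCI-scoped “payments” namespace, with its own replica count and resource limits so it scales and fails independently of any shared gateway. Again, the image name is a placeholder.

```yaml
# A gateway dedicated to a single regulated workload, living alongside it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-gateway
  namespace: payments                   # hypothetical PCI-scoped namespace
spec:
  replicas: 2                           # scales independently of any other gateway
  selector:
    matchLabels:
      app: payments-gateway
  template:
    metadata:
      labels:
        app: payments-gateway
    spec:
      containers:
      - name: gateway
        image: example/gateway:latest   # placeholder image
        resources:
          requests:
            cpu: "500m"
            memory: 256Mi
          limits:
            cpu: "1"
            memory: 512Mi
```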
In Tyk’s experience, clients typically mix and match the shared gateway approach with the gateway per service approach.
Gateway sharding
Sharding allows different gateway instances to selectively load different sets of APIs despite having a common control plane.
For simplicity’s sake, let’s assume you have two zones in your network – a DMZ and an internal zone. You deploy a gateway cluster to the DMZ and another to the internal zone and shard the gateways appropriately. Now you can dynamically make routes to all services available to the gateway in the internal zone, but perhaps restrict the APIs made available within the DMZ.
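A hedged sketch of this setup is shown below: two gateway Deployments point at the same control plane but load different API sets. GATEWAY_SEGMENT_TAGS is a hypothetical environment variable standing in for your vendor’s sharding or segmentation setting (Tyk, for instance, does this with gateway segment tags), so check your gateway’s documentation for the real option.

```yaml
# Two gateway Deployments sharing one control plane but loading different API sets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-dmz
  namespace: dmz
spec:
  replicas: 2
  selector:
    matchLabels: { app: gateway-dmz }
  template:
    metadata:
      labels: { app: gateway-dmz }
    spec:
      containers:
      - name: gateway
        image: example/gateway:latest
        env:
        - name: GATEWAY_SEGMENT_TAGS   # hypothetical segmentation setting
          value: "dmz"                 # only APIs tagged "dmz" are loaded here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-internal
  namespace: internal
spec:
  replicas: 2
  selector:
    matchLabels: { app: gateway-internal }
  template:
    metadata:
      labels: { app: gateway-internal }
    spec:
      containers:
      - name: gateway
        image: example/gateway:latest
        env:
        - name: GATEWAY_SEGMENT_TAGS   # hypothetical segmentation setting
          value: "internal"            # internal gateway can load the full route set
```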
Gateway API
The newly launched Kubernetes Gateway API provides an extensible, role-oriented, protocol-aware configuration mechanism for making network services available. It’s a collaborative project to develop a common API to model networking inside Kubernetes.
If you’re familiar with the older Ingress API, you can think of the Gateway API as a more expressive, next-generation version of it. Rather than configuring ingress through a collection of annotations, you work with a set of strongly typed Kubernetes resources. There are three main kinds to be aware of – GatewayClass, Gateway and HTTPRoute – sketched in the example after this list:
- GatewayClass describes the kind of controller that is responsible for a given gateway. It could be regarded as a template for Gateway deployments.
- Gateway describes an instance of an API gateway, binding listeners to a set of IP addresses.
- HTTPRoute – as the name implies – provides a way to route HTTP requests. This includes the capability to match requests by hostname, path, header or even query parameter. You can also define backends here to tell the gateway which backend to route traffic to.
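To make that division of responsibility concrete, here is a minimal sketch of the three resources working together. The resource names, hostname and backend service are placeholders, and the apiVersion shown is the v1beta1 channel; the exact version depends on which Gateway API release is installed in your cluster.

```yaml
# GatewayClass: defined by the infrastructure provider, names the controller.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller   # placeholder controller
---
# Gateway: created by the cluster operator, an instance of the class with listeners.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: ingress
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                      # let routes in any namespace attach here
---
# HTTPRoute: owned by the app developer, matches requests and picks a backend.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: orders-route
  namespace: shop
spec:
  parentRefs:
  - name: example-gateway
    namespace: ingress
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /orders
    backendRefs:
    - name: orders-service             # placeholder upstream Service
      port: 8080
```

Applying these three manifests gives each persona a resource it can own, which also means RBAC can be scoped along the same lines.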
What’s really powerful about the Gateway API is that it has been designed to be role-oriented. The API is separated by the responsibilities of different system users: infrastructure providers are concerned with the GatewayClass, cluster operators interact with Gateway objects, and app developers work with HTTPRoute objects.
Tyk and Kubernetes
Tyk and Kubernetes play nicely together, whichever deployment pattern best suits your needs. If you’re ready to describe your entire API system declaratively, you can use Tyk’s Kubernetes Operator to bring GitOps practices to API management processes.
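As a hedged illustration of that declarative approach, the sketch below follows the shape of an ApiDefinition resource as used by the Tyk Operator. The listen_path, target_url and metadata are placeholders, and the field names should be checked against the Operator version you run.

```yaml
# A declarative Tyk API definition, managed like any other Kubernetes resource,
# so it can live in Git and be applied through your GitOps tooling.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: orders
spec:
  name: orders
  protocol: http
  active: true
  use_keyless: true                    # no auth, purely for illustration
  proxy:
    listen_path: /orders               # path exposed on the gateway
    target_url: http://orders.shop.svc.cluster.local:8080   # placeholder upstream
    strip_listen_path: true
```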